diff --git a/src/administration/troubleshooting.md b/src/administration/troubleshooting.md
index 45aeac6..19604a9 100644
--- a/src/administration/troubleshooting.md
+++ b/src/administration/troubleshooting.md
@@ -88,3 +88,105 @@ You will see a table like the following:
 | mastodon.coloradocrest.net | 6837196 | 6837196 | 0 | 1970-01-01 00:00:00+00 |
 
 This will show you exactly which instances are up to date or not.
+
+You can also use the following website, which lets you monitor this without admin rights and also lets others see it:
+
+https://phiresky.github.io/lemmy-federation-state/site
+
+### You don't receive actions reliably
+
+Due to the lemmy queue, remote lemmy instances send apub sync actions to you serially. If your server processes them more slowly than the origin server sends them, you'll see your instance in the "lagging behind" section when visiting the [lemmy-federation-state](https://phiresky.github.io/lemmy-federation-state/site) page for the remote server.
+
+Typically an incoming action should take less than 100ms to process. If it takes longer, this might signify problems with your database performance or your networking setup.
+
+Note that an apub ingestion speed which seems sufficient for most other instances might still become insufficient if the origin server receives actions from its userbase faster than you can process them. For example, if the origin server receives 10 actions per second but you can only process 8 actions per second, you'll inevitably start falling behind that one server only.
+
+These steps might help you diagnose this.
+
+#### Check processing time on the loadbalancer
+
+Check how long a request takes to process on the backend. In haproxy, for example, the following command will show you the time it takes for apub actions to complete:
+
+```bash
+tail -f /var/log/haproxy.log | grep "POST \/inbox"
+```
+
+[See here for nginx](https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/)
+
+If these actions take more than 100ms, you might want to investigate deeper.
+
+#### Check your database performance
+
+Ensure that your database server is not running very high in CPU or RAM utilization.
+
+Afterwards, check for slow queries. If you regularly see common queries with a high max and mean execution time, it might signify that your database is struggling. The SQL query below will show you these queries (times are in milliseconds; you will need `pg_stat_statements` [enabled](https://www.postgresql.org/docs/current/pgstatstatements.html)):
+
+```sql
+\x auto
+SELECT userid::regrole AS "user",query,max_exec_time,mean_exec_time,calls FROM pg_stat_statements WHERE max_exec_time > 10 AND calls > 100 ORDER BY max_exec_time DESC;
+```
+
+If you see very high times on inserts, you might want to consider disabling `synchronous_commit` to see if this helps.
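+
+One possible way to do this (a sketch, assuming you want to change the setting cluster-wide and can run `ALTER SYSTEM` as a superuser) is to disable it and reload the configuration from a psql session:
+
+```sql
+-- Stops commits from waiting for the WAL flush: commits get faster,
+-- but a crash can lose the most recent transactions. Evaluate before using in production.
+ALTER SYSTEM SET synchronous_commit = off;
+SELECT pg_reload_conf();
+```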
+
+#### Check your backend performance
+
+As with the DB, if the server where your lemmy rust backend is running is overloaded, you might see a similar impact.
+
+#### Check your network layout
+
+If your backend and database appear to be in good condition, it might be that your issue is network-based.
+
+One problem can occur if your backend and your database are not on the same server and are geographically far from each other. Due to the number of DB queries performed for each apub sync request, even a small amount of latency can quickly add up.
+
+Check the latency between your rust backend and your DB using ping:
+
+```bash
+ping your_database_ip
+```
+
+If the `time` you see is above 1-2ms, this can start causing such delays. In that case, you might want to consider moving your backend geographically closer to your DB, so that your latency stays below 2ms.
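+
+Ping only measures raw network latency. As a rough additional check (a sketch; the host, user and database names below are placeholders for your own connection settings), you can time a trivial query from the host running the rust backend, which includes the full PostgreSQL round trip:
+
+```bash
+# Includes connection setup and authentication, so steady-state per-query latency will be somewhat lower than this
+time psql -h your_database_ip -U lemmy -d lemmy -c 'SELECT 1;'
+```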
+
+Note that your external loadbalancer(s) (if any) do not necessarily need to be close to the DB, as they do not perform multiple small DB requests.
+
+## Downgrading
+
+If you upgraded your instance to a newer version (by mistake or as planned) and need to downgrade it, you often need to reverse the database changes as well.
+
+First you need to figure out which SQL changes happened between the version you upgraded to and the one you're downgrading to. Then, in that diff, check which folders were added in the `migrations` dir.
+
+Let's say that for the downgrade you're doing, the following migrations were added:
+
+```
+2023-10-24-131607_proxy_links
+2023-10-27-142514_post_url_content_type
+2023-12-19-210053_tolerable-batch-insert-speed
+2023-12-22-040137_make-mixed-sorting-directions-work-with-tuple-comparison
+2024-01-05-213000_community_aggregates_add_local_subscribers
+2024-01-15-100133_local-only-community
+2024-01-22-105746_lemmynsfw-changes
+2024-01-25-151400_remove_auto_resolve_report_trigger
+2024-02-15-171358_default_instance_sort_type
+2024-02-27-204628_add_post_alt_text
+2024-02-28-144211_hide_posts
+```
+
+Each of these folders contains a `down.sql` file. We need to run that against our postgresql DB to roll back those DB changes.
+
+1. Stop your lemmy backend, and take a backup of your DB.
+1. Copy the `migrations` folder to your DB container or server
+1. Acquire a shell in your postgresql container or server and switch to the `postgres` user
+1. Run each relevant script with this command, starting with the newest migration and working backwards
+   ```bash
+   downfolder=2024-02-28-144211_hide_posts
+   psql -d lemmy -a -f /path/to/migrations/${downfolder}/down.sql
+   ```
+   Alternatively, copy the content of the file and paste it into a psql session
+1. You now need to remove the corresponding records from the `__diesel_schema_migrations` table, so that the migrations will be correctly applied the next time you upgrade. You can use this command to list them sorted by run date:
+   ```sql
+   select * from __diesel_schema_migrations ORDER BY run_on ASC;
+   ```
+   You have to delete the entries in that table which correspond to the migrations you just reverted (their `run_on` timestamp will typically be from when you upgraded)
+   ```sql
+   delete from __diesel_schema_migrations where version='20240228144211';
+   ```
+1. You should now be able to start your lemmy on the previous version
diff --git a/src/contributors/03-docker-development.md b/src/contributors/03-docker-development.md
index f6789c5..7d49f7f 100644
--- a/src/contributors/03-docker-development.md
+++ b/src/contributors/03-docker-development.md
@@ -23,6 +23,68 @@ Get the code with submodules:
 git clone https://github.com/LemmyNet/lemmy --recursive
 ```
 
+### Building
+
+Use the following command to create a custom container based on your local branch and tagged accordingly.
+
+This is useful if you want to modify the source code of your instance to add some extra functionality which is not available in the main release.
+
+```bash
+sudo docker build . -f docker/Dockerfile --build-arg RUST_RELEASE_MODE=release -t "lemmy:$(git rev-parse --abbrev-ref HEAD)"
+```
+
+#### Build Troubleshooting
+
+In case the build fails, the following might help resolve it.
+
+##### Translations missing
+
+If you see an error like this:
+
+```
+Error: FileRead { file: "translations/email/en.json", source: Os { code: 2, kind: NotFound, message: "No such file or directory" } }
+```
+
+Try these commands:
+
+```bash
+git submodule init && git submodule update
+```
+
+Then try building again.
+
+### Running a custom build on your server
+
+If you want to run a custom docker build on your instance, you don't need to upload it to a container repository; you can transfer it directly from your PC through ssh.
+
+The following commands will copy the image to your instance and then load it into the local docker image store on your server:
+
+```bash
+LEMMY_SRV=lemmy.example.com # Add the FQDN, IP or hostname of your lemmy server here
+# We store the image in /tmp to avoid putting it in our local branch and committing it by mistake
+sudo docker save -o /tmp/customlemmy.tar "lemmy:$(git rev-parse --abbrev-ref HEAD)"
+# We change permissions to allow our normal user to read the file, as root might not have ssh keys
+sudo chown $(whoami) /tmp/customlemmy.tar
+scp /tmp/customlemmy.tar ${LEMMY_SRV}:
+ssh ${LEMMY_SRV}
+# This command should be run on your lemmy server as the user you uploaded the file as
+sudo docker load -i ${HOME}/customlemmy.tar
+```
+
+After the image is loaded on your server, simply change the docker-compose to use your own tag in the `image` key
+
+```
+image: lemmy:your_branch_name
+```
+
+Finally, recreate the container
+
+```
+docker-compose up -d
+```
+
+You should now be running your custom docker container.
+
 ### Running
 
 ```bash