Minor improvements to markdown guide #21

Closed
nutomic wants to merge 0 commits from nutomic:markdown-guide into master
260 changed files with 10745 additions and 34868 deletions

10
.dockerignore vendored
@@ -1,12 +1,4 @@
# build folders and similar which are not needed for the docker build
ui/node_modules
ui/dist
server/target
docker/dev/volumes
docker/federation/volumes
docker/federation-test/volumes
.git
ansible
# exceptions, needed for federation-test build
!server/target/debug/lemmy_server

@@ -1,27 +0,0 @@
---
name: "\U0001F41E Bug Report"
about: Create a report to help us improve Lemmy
title: ''
labels: bug
assignees: ''
---
Found a bug? Please fill out the sections below. 👍
### Issue Summary
A summary of the bug.
### Steps to Reproduce
1. (for example) I clicked login, and an endless spinner showed up.
2. I tried to install lemmy via this guide, and I'm getting this error.
3. ...
### Technical details
* Please post your log: `sudo docker-compose logs > lemmy_log.out`.
* What OS are you trying to install lemmy on?
* Any browser console errors?
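For the log and OS questions above, a minimal sketch that collects everything in one pass, assuming a docker-compose based install run from its compose directory (the output file names are illustrative):

```bash
# capture the server log, as requested in the template
sudo docker-compose logs > lemmy_log.out
# record kernel and distribution details for the OS question
uname -srm > lemmy_env.out
cat /etc/os-release >> lemmy_env.out
```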

@@ -1,42 +0,0 @@
---
name: "\U0001F680 Feature request"
about: Suggest an idea for improving Lemmy
title: ''
labels: enhancement
assignees: ''
---
### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
(Write your answer here.)
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
(Describe your proposed solution here.)
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
(Write your answer here.)
### Additional context
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here, if you haven't already.
-->
(Write your answer here.)

@@ -1,10 +0,0 @@
---
name: "? Question"
about: General questions about Lemmy
title: ''
labels: question
assignees: ''
---
What's the question you have about lemmy?

11
.gitignore vendored
@@ -1,18 +1,7 @@
# local ansible configuration
ansible/inventory
ansible/inventory_dev
ansible/passwords/
# docker build files
docker/lemmy_mine.hjson
docker/dev/env_deploy.sh
docker/federation/volumes
docker/federation-test/volumes
docker/dev/volumes
# local build files
build/
ui/src/translations
# ide config
.idea/

4
.travis.yml vendored
@@ -12,6 +12,7 @@ before_cache:
- rm -rfv target/debug/build/lemmy_server-*
- rm -rfv target/debug/deps/lemmy_server-*
- rm -rfv target/debug/lemmy_server.d
- cargo clean
before_script:
- psql -c "create user lemmy with password 'password' superuser;" -U postgres
- psql -c 'create database lemmy with owner lemmy;' -U postgres
@@ -24,11 +25,10 @@ script:
- cargo clippy -- -D clippy::style -D clippy::correctness -D clippy::complexity -D clippy::perf
- cargo install diesel_cli --no-default-features --features postgres --force
- diesel migration run
- cargo test --workspace
- cargo test
env:
global:
- DATABASE_URL=postgres://lemmy:password@localhost:5432/lemmy
- LEMMY_DATABASE_URL=postgres://lemmy:password@localhost:5432/lemmy
- RUST_TEST_THREADS=1
addons:
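The CI steps above can be reproduced locally; a rough sketch, assuming postgres is running locally with the `lemmy` user and database created as in `before_script`, and `diesel_cli` installed:

```bash
# diesel reads DATABASE_URL; the server tests read LEMMY_DATABASE_URL
export DATABASE_URL=postgres://lemmy:password@localhost:5432/lemmy
export LEMMY_DATABASE_URL=$DATABASE_URL
diesel migration run
# RUST_TEST_THREADS=1 keeps the database-backed tests serial, as in CI
RUST_TEST_THREADS=1 cargo test --workspace
```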

4
CODE_OF_CONDUCT.md vendored
@@ -19,7 +19,7 @@ These are the policies for upholding our community's standards of conduct. If
1. Remarks that violate the Lemmy standards of conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
3. Moderators will first respond to such remarks with a warning, at the same time the offending content will likely be removed whenever possible.
3. Moderators will first respond to such remarks with a warning.
4. If the warning is unheeded, the user will be “kicked,” i.e., kicked out of the communication channel to cool off.
5. If the user comes back and continues to make trouble, they will be banned, i.e., indefinitely excluded.
6. Moderators may choose at their discretion to un-ban the user if it was a first offense and they offer the offended party a genuine apology.
@@ -30,6 +30,6 @@ In the Lemmy community we strive to go the extra step to look out for each other
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better — remember that it's your responsibility to make others comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.
The enforcement policies listed above apply to all official Lemmy venues; including git repositories under [github.com/LemmyNet/lemmy](https://github.com/LemmyNet/lemmy) and [yerbamate.dev/LemmyNet/lemmy](https://yerbamate.dev/LemmyNet/lemmy), the [Matrix channel](https://matrix.to/#/!BZVTUuEiNmRcbFeLeI:matrix.org?via=matrix.org&via=privacytools.io&via=permaweb.io); and all instances under lemmy.ml. For other projects adopting the Rust Code of Conduct, please contact the maintainers of those projects for enforcement. If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion.
The enforcement policies listed above apply to all official Lemmy venues; including git repositories under [github.com/dessalines/lemmy](https://github.com/dessalines/lemmy) and [yerbamate.dev/dessalines/lemmy](https://yerbamate.dev/dessalines/lemmy), the [Matrix channel](https://matrix.to/#/!BZVTUuEiNmRcbFeLeI:matrix.org?via=matrix.org&via=privacytools.io&via=permaweb.io); and all instances under lemmy.ml. For other projects adopting the Rust Code of Conduct, please contact the maintainers of those projects for enforcement. If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion.
Adapted from the [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct), which is based on the [Node.js Policy on Trolling](http://blog.izs.me/post/30036893703/policy-on-trolling) as well as the [Contributor Covenant v1.3.0](https://www.contributor-covenant.org/version/1/3/0/).

80
README.md vendored
@@ -1,12 +1,11 @@
<div align="center">
![GitHub tag (latest SemVer)](https://img.shields.io/github/tag/LemmyNet/lemmy.svg)
[![Build Status](https://travis-ci.org/LemmyNet/lemmy.svg?branch=master)](https://travis-ci.org/LemmyNet/lemmy)
[![GitHub issues](https://img.shields.io/github/issues-raw/LemmyNet/lemmy.svg)](https://github.com/LemmyNet/lemmy/issues)
![GitHub tag (latest SemVer)](https://img.shields.io/github/tag/dessalines/lemmy.svg)
[![Build Status](https://travis-ci.org/dessalines/lemmy.svg?branch=master)](https://travis-ci.org/dessalines/lemmy)
[![GitHub issues](https://img.shields.io/github/issues-raw/dessalines/lemmy.svg)](https://github.com/dessalines/lemmy/issues)
[![Docker Pulls](https://img.shields.io/docker/pulls/dessalines/lemmy.svg)](https://cloud.docker.com/repository/docker/dessalines/lemmy/)
[![Translation status](http://weblate.yerbamate.dev/widgets/lemmy/-/lemmy/svg-badge.svg)](http://weblate.yerbamate.dev/engage/lemmy/)
[![License](https://img.shields.io/github/license/LemmyNet/lemmy.svg)](LICENSE)
![GitHub stars](https://img.shields.io/github/stars/LemmyNet/lemmy?style=social)
[![License](https://img.shields.io/github/license/dessalines/lemmy.svg)](LICENSE)
![GitHub stars](https://img.shields.io/github/stars/dessalines/lemmy?style=social)
</div>
<p align="center">
@@ -15,18 +14,18 @@
<h3 align="center"><a href="https://dev.lemmy.ml">Lemmy</a></h3>
<p align="center">
A link aggregator / Reddit clone for the fediverse.
A link aggregator / reddit clone for the fediverse.
<br />
<br />
<a href="https://dev.lemmy.ml">View Site</a>
·
<a href="https://dev.lemmy.ml/docs/index.html">Documentation</a>
·
<a href="https://github.com/LemmyNet/lemmy/issues">Report Bug</a>
<a href="https://github.com/dessalines/lemmy/issues">Report Bug</a>
·
<a href="https://github.com/LemmyNet/lemmy/issues">Request Feature</a>
<a href="https://github.com/dessalines/lemmy/issues">Request Feature</a>
·
<a href="https://github.com/LemmyNet/lemmy/blob/master/RELEASES.md">Releases</a>
<a href="https://github.com/dessalines/lemmy/blob/master/RELEASES.md">Releases</a>
</p>
</p>
@@ -34,17 +33,17 @@
Front Page|Post
---|---
![main screen](https://raw.githubusercontent.com/LemmyNet/lemmy/master/docs/img/main_screen.png)|![chat screen](https://raw.githubusercontent.com/LemmyNet/lemmy/master/docs/img/chat_screen.png)
![main screen](https://i.imgur.com/kZSRcRu.png)|![chat screen](https://i.imgur.com/4XghNh6.png)
[Lemmy](https://github.com/LemmyNet/lemmy) is similar to sites like [Reddit](https://reddit.com), [Lobste.rs](https://lobste.rs), [Raddle](https://raddle.me), or [Hacker News](https://news.ycombinator.com/): you subscribe to forums you're interested in, post links and discussions, then vote, and comment on them. Behind the scenes, it is very different; anyone can easily run a server, and all these servers are federated (think email), and connected to the same universe, called the [Fediverse](https://en.wikipedia.org/wiki/Fediverse).
[Lemmy](https://github.com/dessalines/lemmy) is similar to sites like [Reddit](https://reddit.com), [Lobste.rs](https://lobste.rs), [Raddle](https://raddle.me), or [Hacker News](https://news.ycombinator.com/): you subscribe to forums you're interested in, post links and discussions, then vote, and comment on them. Behind the scenes, it is very different; anyone can easily run a server, and all these servers are federated (think email), and connected to the same universe, called the [Fediverse](https://en.wikipedia.org/wiki/Fediverse).
For a link aggregator, this means a user registered on one server can subscribe to forums on any other server, and can have discussions with users registered elsewhere.
The overall goal is to create an easily self-hostable, decentralized alternative to Reddit and other link aggregators, outside of their corporate control and meddling.
The overall goal is to create an easily self-hostable, decentralized alternative to reddit and other link aggregators, outside of their corporate control and meddling.
Each Lemmy server can set its own moderation policy; appointing site-wide admins, and community moderators to keep out the trolls, and foster a healthy, non-toxic environment where all can feel comfortable contributing.
Each lemmy server can set its own moderation policy; appointing site-wide admins, and community moderators to keep out the trolls, and foster a healthy, non-toxic environment where all can feel comfortable contributing.
*Note: Federation is still in active development, and the WebSocket and HTTP APIs are currently unstable*
*Note: Federation is still in active development*
### Why's it called Lemmy?
@@ -70,15 +69,14 @@ Each Lemmy server can set its own moderation policy; appointing site-wide admins
- Only a minimum of a username and password is required to sign up!
- User avatar support.
- Live-updating Comment threads.
- Full vote scores `(+/-)` like old Reddit.
- Full vote scores `(+/-)` like old reddit.
- Themes, including light, dark, and solarized.
- Emojis with autocomplete support. Start typing `:`
- User tagging using `@`, Community tagging using `!`.
- User tagging using `@`, Community tagging using `#`.
- Integrated image uploading in both posts and comments.
- A post can consist of a title and any combination of self text, a URL, or nothing else.
- Notifications, on comment replies and when you're tagged.
- Notifications can be sent via email.
- Private messaging support.
- i18n / internationalization support.
- RSS / Atom feeds for `All`, `Subscribed`, `Inbox`, `User`, and `Community`.
- Cross-posting support.
@@ -108,9 +106,8 @@ Each Lemmy server can set its own moderation policy; appointing site-wide admins
Lemmy is free, open-source software, meaning no advertising, monetizing, or venture capital, ever. Your donations directly support full-time development of the project.
- [Support on Liberapay](https://liberapay.com/Lemmy).
- [Support on Liberapay.](https://liberapay.com/Lemmy)
- [Support on Patreon](https://www.patreon.com/dessalines).
- [Support on OpenCollective](https://opencollective.com/lemmy).
- [List of Sponsors](https://dev.lemmy.ml/sponsors).
### Crypto
@@ -125,19 +122,44 @@ Lemmy is free, open-source software, meaning no advertising, monetizing, or vent
- [Docker Development](https://dev.lemmy.ml/docs/contributing_docker_development.html)
- [Local Development](https://dev.lemmy.ml/docs/contributing_local_development.html)
### Translations
### Translations
If you want to help with translating, take a look at [Weblate](https://weblate.yerbamate.dev/projects/lemmy/).
If you'd like to add translations, take a look at the [English translation file](ui/src/translations/en.ts).
- Languages supported: Brazilian Portuguese (`pt-br`), Catalan (`ca`), Farsi (`fa`), English (`en`), Chinese (`zh`), Dutch (`nl`), Esperanto (`eo`), Finnish (`fi`), French (`fr`), Spanish (`es`), Swedish (`sv`), German (`de`), Russian (`ru`), Italian (`it`).
<!-- translations -->
lang | done | missing
---- | ---- | -------
ca | 97% | cross_posted_to,old,support_on_liberapay,couldnt_get_comments,post_title_too_long,time,action
de | 86% | cross_posted_to,create_private_message,send_secure_message,send_message,message,avatar,upload_avatar,show_avatars,old,docs,message_sent,messages,old_password,matrix_user_id,private_message_disclaimer,send_notifications_to_email,downvotes_disabled,enable_downvotes,open_registration,registration_closed,enable_nsfw,donate_to_lemmy,donate,from,logged_in,couldnt_get_comments,post_title_too_long,email_already_exists,couldnt_create_private_message,no_private_message_edit_allowed,couldnt_update_private_message,time,action
fa | 71% | cross_post,cross_posted_to,subscribed_to_communities,trending_communities,create_private_message,send_secure_message,send_message,message,mod,mods,moderates,remove_as_mod,appoint_as_mod,modlog,stickied,ban,ban_from_site,unban,unban_from_site,banned,number_of_subscribers,subscribers,both,saved,unsubscribe,subscribe,subscribed,old,api,docs,inbox,inbox_for,message_sent,notifications_error,messages,no_email_setup,matrix_user_id,private_message_disclaimer,url,body,copy_suggested_title,community,expand_here,subscribe_to_communities,theme,sponsor_message,support_on_liberapay,general_sponsors,joined,by,to,from,landing_0,logged_in,couldnt_get_comments,community_moderator_already_exists,community_follower_already_exists,community_user_already_banned,post_title_too_long,no_slurs,admin_already_created,couldnt_create_private_message,no_private_message_edit_allowed,couldnt_update_private_message,time,action
eo | 73% | cross_posted_to,number_of_communities,create_private_message,send_secure_message,send_message,message,preview,upload_image,avatar,upload_avatar,show_avatars,formatting_help,view_source,sticky,unsticky,archive_link,stickied,delete_account,delete_account_confirm,banned,creator,number_online,old,docs,replies,mentions,message_sent,messages,old_password,forgot_password,reset_password_mail_sent,password_change,new_password,no_email_setup,matrix_user_id,private_message_disclaimer,send_notifications_to_email,language,browser_default,downvotes_disabled,enable_downvotes,open_registration,registration_closed,enable_nsfw,theme,support_on_liberapay,donate_to_lemmy,donate,from,are_you_sure,yes,no,logged_in,couldnt_get_comments,post_title_too_long,email_already_exists,couldnt_create_private_message,no_private_message_edit_allowed,couldnt_update_private_message,time,action
es | 99% | cross_posted_to,couldnt_get_comments,post_title_too_long
fi | 97% | cross_posted_to,old,support_on_liberapay,couldnt_get_comments,post_title_too_long,time,action
fr | 100% | upload_avatar,show_avatars
it | 82% | cross_posted_to,create_private_message,send_secure_message,send_message,message,avatar,upload_avatar,show_avatars,archive_link,old,docs,message_sent,messages,old_password,forgot_password,reset_password_mail_sent,password_change,new_password,no_email_setup,matrix_user_id,private_message_disclaimer,send_notifications_to_email,language,browser_default,downvotes_disabled,enable_downvotes,open_registration,registration_closed,enable_nsfw,donate_to_lemmy,donate,from,logged_in,couldnt_get_comments,post_title_too_long,email_already_exists,couldnt_create_private_message,no_private_message_edit_allowed,couldnt_update_private_message,time,action
nl | 98% | cross_posted_to,couldnt_get_comments,post_title_too_long,time,action
pt-br | 99% | couldnt_get_comments,post_title_too_long
ru | 70% | cross_posts,cross_post,cross_posted_to,number_of_communities,create_private_message,send_secure_message,send_message,message,preview,upload_image,avatar,upload_avatar,show_avatars,formatting_help,view_source,sticky,unsticky,archive_link,stickied,delete_account,delete_account_confirm,banned,creator,number_online,old,docs,replies,mentions,message_sent,messages,old_password,forgot_password,reset_password_mail_sent,password_change,new_password,no_email_setup,matrix_user_id,private_message_disclaimer,send_notifications_to_email,language,browser_default,downvotes_disabled,enable_downvotes,open_registration,registration_closed,enable_nsfw,recent_comments,theme,support_on_liberapay,donate_to_lemmy,donate,monero,by,to,from,transfer_community,transfer_site,are_you_sure,yes,no,logged_in,couldnt_get_comments,post_title_too_long,email_already_exists,couldnt_create_private_message,no_private_message_edit_allowed,couldnt_update_private_message,time,action
sv | 81% | cross_posted_to,create_private_message,send_secure_message,send_message,message,avatar,upload_avatar,show_avatars,archive_link,old,docs,replies,mentions,message_sent,messages,old_password,forgot_password,reset_password_mail_sent,password_change,new_password,no_email_setup,matrix_user_id,private_message_disclaimer,send_notifications_to_email,language,browser_default,downvotes_disabled,enable_downvotes,open_registration,registration_closed,enable_nsfw,support_on_liberapay,donate_to_lemmy,donate,from,logged_in,couldnt_get_comments,post_title_too_long,email_already_exists,couldnt_create_private_message,no_private_message_edit_allowed,couldnt_update_private_message,time,action
zh | 69% | cross_posts,cross_post,cross_posted_to,users,number_of_communities,create_private_message,send_secure_message,send_message,message,preview,upload_image,avatar,upload_avatar,show_avatars,formatting_help,view_source,sticky,unsticky,archive_link,settings,stickied,delete_account,delete_account_confirm,banned,creator,number_online,old,docs,replies,mentions,message_sent,messages,old_password,forgot_password,reset_password_mail_sent,password_change,new_password,no_email_setup,matrix_user_id,private_message_disclaimer,send_notifications_to_email,language,browser_default,downvotes_disabled,enable_downvotes,open_registration,registration_closed,enable_nsfw,recent_comments,nsfw,show_nsfw,theme,donate_to_lemmy,donate,monero,by,to,from,transfer_community,transfer_site,are_you_sure,yes,no,logged_in,couldnt_get_comments,post_title_too_long,email_already_exists,couldnt_create_private_message,no_private_message_edit_allowed,couldnt_update_private_message,time,action
<!-- translationsstop -->
If you'd like to update this report, run:
```bash
cd ui
ts-node translation_report.ts
```
## Contact
- [Mastodon](https://mastodon.social/@LemmyDev)
- [Matrix](https://riot.im/app/#/room/#rust-reddit-fediverse:matrix.org)
## Code Mirrors
- [GitHub](https://github.com/LemmyNet/lemmy)
- [Gitea](https://yerbamate.dev/LemmyNet/lemmy)
- [Mastodon](https://mastodon.social/@LemmyDev) - [![Mastodon Follow](https://img.shields.io/mastodon/follow/810572?domain=https%3A%2F%2Fmastodon.social&style=social)](https://mastodon.social/@LemmyDev)
- [Matrix](https://riot.im/app/#/room/#rust-reddit-fediverse:matrix.org) - [![Matrix](https://img.shields.io/matrix/rust-reddit-fediverse:matrix.org.svg?label=matrix-chat)](https://riot.im/app/#/room/#rust-reddit-fediverse:matrix.org)
- [GitHub](https://github.com/dessalines/lemmy)
- [Gitea](https://yerbamate.dev/dessalines/lemmy)
- [GitLab](https://gitlab.com/dessalines/lemmy)
## Credits

67
RELEASES.md vendored
@@ -1,69 +1,6 @@
# Lemmy v0.7.0 Release (2020-06-23)
This release replaces [pictshare](https://github.com/HaschekSolutions/pictshare)
with [pict-rs](https://git.asonix.dog/asonix/pict-rs), which improves performance
and security.
Overall, since our last major release in January (v0.6.0), we have closed over
[100 issues!](https://github.com/LemmyNet/lemmy/milestone/16?closed=1)
- Site-wide list of recent comments
- Reconnecting websockets
- Many more themes, including a default light one.
- Expandable embeds for post links (and thumbnails), from
[iframely](https://github.com/itteco/iframely)
- Better icons
- Emoji autocomplete to post and message bodies, and an Emoji Picker
- Post body now searchable
- Community title and description is now searchable
- Simplified cross-posts
- Better documentation
- LOTS more languages
- Lots of bugs squashed
- And more ...
## Upgrading
Before starting the upgrade, make sure that you have a working backup of your
database and image files. See our
[documentation](https://dev.lemmy.ml/docs/administration_backup_and_restore.html)
for backup instructions.
**With Ansible:**
```
# deploy with ansible from your local lemmy git repo
git pull
cd ansible
ansible-playbook lemmy.yml
# connect via ssh to run the migration script
ssh your-server
cd /lemmy/
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/docker/prod/migrate-pictshare-to-pictrs.bash
chmod +x migrate-pictshare-to-pictrs.bash
sudo ./migrate-pictshare-to-pictrs.bash
```
**With manual Docker installation:**
```
# run these commands on your server
cd /lemmy
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/ansible/templates/nginx.conf
# Replace the {{ vars }}
sudo mv nginx.conf /etc/nginx/sites-enabled/lemmy.conf
sudo nginx -s reload
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/docker/prod/docker-compose.yml
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/docker/prod/migrate-pictshare-to-pictrs.bash
chmod +x migrate-pictshare-to-pictrs.bash
sudo bash migrate-pictshare-to-pictrs.bash
```
**Note:** After upgrading, all users need to reload the page, then log out and
log back in, so that images are loaded correctly.
# Lemmy v0.6.0 Release (2020-01-16)
`v0.6.0` is here, and we've closed [41 issues!](https://github.com/LemmyNet/lemmy/milestone/15?closed=1)
`v0.6.0` is here, and we've closed [41 issues!](https://github.com/dessalines/lemmy/milestone/15?closed=1)
This is the biggest release by far:
@@ -73,7 +10,7 @@ This is the biggest release by far:
- Can set a custom language.
- Lemmy-wide settings to disable downvotes, and close registration.
- A better documentation system, hosted in lemmy itself.
- [Huge DB performance gains](https://github.com/LemmyNet/lemmy/issues/411) (everything down to < `30ms`) by using materialized views.
- [Huge DB performance gains](https://github.com/dessalines/lemmy/issues/411) (everything down to < `30ms`) by using materialized views.
- Fixed major issue with similar post URL and title searching.
- Upgraded to Actix `2.0`
- Faster comment / post voting.

2
ansible/VERSION vendored
@@ -1 +1 @@
v0.7.21
v0.6.22

3
ansible/ansible.cfg vendored
@@ -1,6 +1,5 @@
[defaults]
inventory = inventory
interpreter_python = /usr/bin/python3
inventory=inventory
[ssh_connection]
pipelining = True

@@ -1,12 +1,6 @@
[lemmy]
# to get started, copy this file to `inventory` and adjust the values below.
# - `myuser@example.com`: replace with the destination you use to connect to your server via ssh
# - `domain=example.com`: replace `example.com` with your lemmy domain
# - `letsencrypt_contact_email=your@email.com` replace `your@email.com` with your email address,
# to get notifications if your ssl cert expires
# - `lemmy_base_dir=/srv/lemmy`: the location on the server where lemmy will be installed; this can be any folder
# if you are upgrading from a previous version, set this to `/lemmy`
myuser@example.com domain=example.com letsencrypt_contact_email=your@email.com lemmy_base_dir=/srv/lemmy
# define the username and hostname that you use for ssh connection, and specify the domain
myuser@example.com domain=example.com letsencrypt_contact_email=your@email.com
[all:vars]
ansible_connection=ssh

82
ansible/lemmy.yml vendored
@@ -5,41 +5,18 @@
# https://www.josharcher.uk/code/ansible-python-connection-failure-ubuntu-server-1604/
gather_facts: False
pre_tasks:
- name: check lemmy_base_dir
fail:
msg: "`lemmy_base_dir` is unset. if you are upgrading from an older version, add `lemmy_base_dir=/lemmy` to your inventory file."
when: lemmy_base_dir is not defined
- name: install python for Ansible
# python2-minimal instead of python-minimal for ubuntu 20.04 and up
raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal python-setuptools)
args:
executable: /bin/bash
register: output
changed_when: output.stdout != ''
changed_when: output.stdout != ""
- setup: # gather facts
tasks:
- name: install dependencies
apt:
pkg:
- 'nginx'
- 'docker-compose'
- 'docker.io'
- 'certbot'
- name: install certbot-nginx on ubuntu < 20
apt:
pkg:
- 'python-certbot-nginx'
when: ansible_distribution == 'Ubuntu' and ansible_distribution_version is version('20.04', '<')
- name: install certbot-nginx on ubuntu > 20
apt:
pkg:
- 'python3-certbot-nginx'
when: ansible_distribution == 'Ubuntu' and ansible_distribution_version is version('20.04', '>=')
pkg: ['nginx', 'docker-compose', 'docker.io', 'certbot', 'python-certbot-nginx']
- name: request initial letsencrypt certificate
command: certbot certonly --nginx --agree-tos -d '{{ domain }}' -m '{{ letsencrypt_contact_email }}'
@@ -47,51 +24,25 @@
creates: '/etc/letsencrypt/live/{{domain}}/privkey.pem'
- name: create lemmy folder
file:
path: '{{item.path}}'
owner: '{{item.owner}}'
state: directory
file: path={{item.path}} state=directory
with_items:
- path: '{{lemmy_base_dir}}'
owner: 'root'
- path: '{{lemmy_base_dir}}/volumes/'
owner: 'root'
- path: '{{lemmy_base_dir}}/volumes/pictrs/'
owner: '991'
- { path: '/lemmy/' }
- { path: '/lemmy/volumes/' }
- block:
- name: add template files
template:
src: '{{item.src}}'
dest: '{{item.dest}}'
mode: '{{item.mode}}'
template: src={{item.src}} dest={{item.dest}} mode={{item.mode}}
with_items:
- src: 'templates/docker-compose.yml'
dest: '{{lemmy_base_dir}}/docker-compose.yml'
mode: '0600'
- src: 'templates/nginx.conf'
dest: '/etc/nginx/sites-enabled/lemmy.conf'
mode: '0644'
- src: '../docker/iframely.config.local.js'
dest: '{{lemmy_base_dir}}/iframely.config.local.js'
mode: '0600'
vars:
lemmy_docker_image: "dessalines/lemmy:{{ lookup('file', 'VERSION') }}"
lemmy_port: "8536"
pictshare_port: "8537"
iframely_port: "8538"
- { src: 'templates/docker-compose.yml', dest: '/lemmy/docker-compose.yml', mode: '0600' }
- { src: 'templates/nginx.conf', dest: '/etc/nginx/sites-enabled/lemmy.conf', mode: '0644' }
- { src: '../docker/iframely.config.local.js', dest: '/lemmy/iframely.config.local.js', mode: '0600' }
- name: add config file (only during initial setup)
template:
src: 'templates/config.hjson'
dest: '{{lemmy_base_dir}}/lemmy.hjson'
mode: '0600'
force: false
owner: '1000'
group: '1000'
template: src='templates/config.hjson' dest='/lemmy/lemmy.hjson' mode='0600' force='no' owner='1000' group='1000'
vars:
postgres_password: "{{ lookup('password', 'passwords/{{ inventory_hostname }}/postgres chars=ascii_letters,digits') }}"
jwt_password: "{{ lookup('password', 'passwords/{{ inventory_hostname }}/jwt chars=ascii_letters,digits') }}"
lemmy_docker_image: "dessalines/lemmy:{{ lookup('file', 'VERSION') }}"
- name: enable and start docker service
systemd:
@@ -101,17 +52,16 @@
- name: start docker-compose
docker_compose:
project_src: '{{lemmy_base_dir}}'
project_src: /lemmy/
state: present
pull: yes
remove_orphans: yes
- name: reload nginx with new config
shell: nginx -s reload
- name: certbot renewal cronjob
cron:
special_time: daily
name: certbot-renew-lemmy
user: root
job: "certbot certonly --nginx -d '{{ domain }}' --deploy-hook 'nginx -s reload'"
special_time=daily
name=certbot-renew-lemmy
user=root
job="certbot certonly --nginx -d '{{ domain }}' --deploy-hook 'nginx -s reload'"

86
ansible/lemmy_dev.yml vendored
@@ -1,34 +1,24 @@
---
- hosts: all
vars:
lemmy_docker_image: 'lemmy:dev'
lemmy_docker_image: "lemmy:dev"
# Install python if required
# https://www.josharcher.uk/code/ansible-python-connection-failure-ubuntu-server-1604/
gather_facts: False
pre_tasks:
- name: check lemmy_base_dir
fail:
msg: "`lemmy_base_dir` is unset. if you are upgrading from an older version, add `lemmy_base_dir=/lemmy` to your inventory file."
when: lemmy_base_dir is not defined
- name: install python for Ansible
raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal python-setuptools)
args:
executable: /bin/bash
register: output
changed_when: output.stdout != ''
changed_when: output.stdout != ""
- setup: # gather facts
tasks:
- name: install dependencies
apt:
pkg:
- 'nginx'
- 'docker-compose'
- 'docker.io'
- 'certbot'
- 'python-certbot-nginx'
pkg: ['nginx', 'docker-compose', 'docker.io', 'certbot', 'python-certbot-nginx']
- name: request initial letsencrypt certificate
command: certbot certonly --nginx --agree-tos -d '{{ domain }}' -m '{{ letsencrypt_contact_email }}'
@@ -36,46 +26,24 @@
creates: '/etc/letsencrypt/live/{{domain}}/privkey.pem'
- name: create lemmy folder
file:
path: '{{item.path}}'
owner: '{{item.owner}}'
state: directory
file: path={{item.path}} state=directory
with_items:
- path: '{{lemmy_base_dir}}/lemmy/'
owner: 'root'
- path: '{{lemmy_base_dir}}/volumes/'
owner: 'root'
- path: '{{lemmy_base_dir}}/volumes/pictrs/'
owner: '991'
- { path: '/lemmy/' }
- { path: '/lemmy/volumes/' }
- block:
- name: add template files
template:
src: '{{item.src}}'
dest: '{{item.dest}}'
mode: '{{item.mode}}'
template: src={{item.src}} dest={{item.dest}} mode={{item.mode}}
with_items:
- src: 'templates/docker-compose.yml'
dest: '{{lemmy_base_dir}}/docker-compose.yml'
mode: '0600'
- src: 'templates/nginx.conf'
dest: '/etc/nginx/sites-enabled/lemmy.conf'
mode: '0644'
- src: '../docker/iframely.config.local.js'
dest: '{{lemmy_base_dir}}/iframely.config.local.js'
mode: '0600'
- { src: 'templates/docker-compose.yml', dest: '/lemmy/docker-compose.yml', mode: '0600' }
- { src: 'templates/nginx.conf', dest: '/etc/nginx/sites-enabled/lemmy.conf', mode: '0644' }
- { src: '../docker/iframely.config.local.js', dest: '/lemmy/iframely.config.local.js', mode: '0600' }
- name: add config file (only during initial setup)
template:
src: 'templates/config.hjson'
dest: '{{lemmy_base_dir}}/lemmy.hjson'
mode: '0600'
force: false
owner: '1000'
group: '1000'
vars:
postgres_password: "{{ lookup('password', 'passwords/{{ inventory_hostname }}/postgres chars=ascii_letters,digits') }}"
jwt_password: "{{ lookup('password', 'passwords/{{ inventory_hostname }}/jwt chars=ascii_letters,digits') }}"
template: src='templates/config.hjson' dest='/lemmy/lemmy.hjson' mode='0600' force='no' owner='1000' group='1000'
vars:
postgres_password: "{{ lookup('password', 'passwords/{{ inventory_hostname }}/postgres chars=ascii_letters,digits') }}"
jwt_password: "{{ lookup('password', 'passwords/{{ inventory_hostname }}/jwt chars=ascii_letters,digits') }}"
- name: build the dev docker image
local_action: shell cd .. && sudo docker build . -f docker/dev/Dockerfile -t lemmy:dev
@@ -90,29 +58,22 @@
local_action: shell sudo docker save lemmy:dev > lemmy-dev.tar
- name: copy dev docker image to server
copy:
src: lemmy-dev.tar
dest: '{{lemmy_base_dir}}/lemmy-dev.tar'
copy: src=lemmy-dev.tar dest=/lemmy/lemmy-dev.tar
- name: import docker image
docker_image:
name: lemmy
tag: dev
load_path: '{{lemmy_base_dir}}/lemmy-dev.tar'
load_path: /lemmy/lemmy-dev.tar
source: load
force_source: yes
register: image_import
- name: delete remote image file
file:
path: '{{lemmy_base_dir}}/lemmy-dev.tar'
state: absent
file: path=/lemmy/lemmy-dev.tar state=absent
- name: delete local image file
local_action:
module: file
path: lemmy-dev.tar
state: absent
local_action: file path=lemmy-dev.tar state=absent
- name: enable and start docker service
systemd:
@@ -124,10 +85,9 @@
# be a problem for testing
- name: start docker-compose
docker_compose:
project_src: '{{lemmy_base_dir}}'
project_src: /lemmy/
state: present
recreate: always
remove_orphans: yes
ignore_errors: yes
- name: reload nginx with new config
@@ -135,7 +95,7 @@
- name: certbot renewal cronjob
cron:
special_time: daily
name: certbot-renew-lemmy
user: root
job: "certbot certonly --nginx -d '{{ domain }}' --deploy-hook 'nginx -s reload'"
special_time=daily
name=certbot-renew-lemmy
user=root
job="certbot certonly --nginx -d '{{ domain }}' --deploy-hook 'nginx -s reload'"

@@ -1,25 +1,13 @@
{
# for more info about the config, check out the documentation
# https://dev.lemmy.ml/docs/administration_configuration.html
# settings related to the postgresql database
database: {
# password to connect to postgres
password: "{{ postgres_password }}"
# host where postgres is running
host: "postgres"
host: "lemmy_db"
}
# the domain name of your instance (eg "dev.lemmy.ml")
hostname: "{{ domain }}"
# json web token for authorization between server and client
jwt_secret: "{{ jwt_password }}"
# The location of the frontend
front_end_dir: "/app/dist"
# email sending configuration
email: {
# hostname of the smtp server
smtp_server: "postfix:25"
# address to send emails from, eg "noreply@your-instance.com"
smtp_from_address: "noreply@{{ domain }}"
use_tls: false
}

@@ -6,38 +6,34 @@ services:
ports:
- "127.0.0.1:8536:8536"
restart: always
environment:
- RUST_LOG=error
volumes:
- ./lemmy.hjson:/config/config.hjson:ro
depends_on:
- postgres
- pictrs
- iframely
- lemmy_db
- lemmy_pictshare
postgres:
lemmy_db:
image: postgres:12-alpine
environment:
- POSTGRES_USER=lemmy
- POSTGRES_PASSWORD={{ postgres_password }}
- POSTGRES_DB=lemmy
volumes:
- ./volumes/postgres:/var/lib/postgresql/data
- lemmy_db:/var/lib/postgresql/data
restart: always
pictrs:
image: asonix/pictrs:amd64-v0.1.0-r9
user: 991:991
lemmy_pictshare:
image: shtripok/pictshare:latest
ports:
- "127.0.0.1:8537:8080"
- "127.0.0.1:8537:80"
volumes:
- ./volumes/pictrs:/mnt
- lemmy_pictshare:/usr/share/nginx/html/data
restart: always
iframely:
lemmy_iframely:
image: dogbin/iframely:latest
ports:
- "127.0.0.1:8061:80"
- "127.0.0.1:8061:8061"
volumes:
- ./iframely.config.local.js:/iframely/config.local.js:ro
restart: always
@@ -47,3 +43,7 @@ services:
environment:
- POSTFIX_myhostname={{ domain }}
restart: "always"
volumes:
lemmy_db:
lemmy_pictshare:
lemmy_iframely:

@@ -1,5 +1,4 @@
proxy_cache_path /var/cache/lemmy_frontend levels=1:2 keys_zone=lemmy_frontend_cache:10m max_size=100m use_temp_path=off;
limit_req_zone $binary_remote_addr zone=lemmy_ratelimit:10m rate=1r/s;
server {
listen 80;
@@ -37,7 +36,7 @@ server {
# It might be nice to compress JSON, but leaving that out to protect against potential
# compression+encryption information leak attacks like BREACH.
gzip on;
gzip_types text/css application/javascript image/svg+xml;
gzip_types text/css application/javascript;
gzip_vary on;
# Only connect to this site via HTTPS for the two years
@@ -49,11 +48,8 @@ server {
add_header X-Frame-Options "DENY";
add_header X-XSS-Protection "1; mode=block";
# Upload limit for pictrs
client_max_body_size 20M;
# Rate limit
limit_req zone=lemmy_ratelimit burst=30 nodelay;
# Upload limit for pictshare
client_max_body_size 50M;
location / {
proxy_pass http://0.0.0.0:8536;
@@ -74,21 +70,15 @@ server {
proxy_cache_min_uses 5;
}
# Redirect pictshare images to pictrs
location ~ /pictshare/(.*)$ {
return 301 /pictrs/image/$1;
}
location /pictshare/ {
proxy_pass http://0.0.0.0:8537/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# pict-rs images
location /pictrs {
location /pictrs/image {
proxy_pass http://0.0.0.0:8537/image;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Block the import
return 403;
if ($request_uri ~ \.(?:ico|gif|jpe?g|png|webp|bmp|mp4)$) {
add_header Cache-Control "public, max-age=31536000, immutable";
}
}
location /iframely/ {

26
ansible/uninstall.yml vendored
@@ -22,33 +22,27 @@
- name: stop docker-compose
docker_compose:
project_src: '{{lemmy_base_dir}}'
project_src: /lemmy/
state: absent
- name: delete data
file:
path: '{{item.path}}'
state: absent
file: path={{item.path}} state=absent
with_items:
- path: '{{lemmy_base_dir}}'
- path: '/etc/nginx/sites-enabled/lemmy.conf'
- { path: '/lemmy/' }
- { path: '/etc/nginx/sites-enabled/lemmy.conf' }
- name: Remove a volume
docker_volume:
name: '{{item.name}}'
state: absent
docker_volume: name={{item.name}} state=absent
with_items:
- name: 'lemmy_lemmy_db'
- name: 'lemmy_lemmy_pictshare'
- { name: 'lemmy_lemmy_db' }
- { name: 'lemmy_lemmy_pictshare' }
- name: delete entire ecloud folder
file:
path: '/mnt/repo-base/'
state: absent
file: path='/mnt/repo-base/' state=absent
when: delete_certs|bool
- name: remove certbot cronjob
cron:
name: certbot-renew-lemmy
state: absent
name=certbot-renew-lemmy
state=absent
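A sketch of invoking the uninstall playbook; the `delete_certs` extra-var matches the `when: delete_certs|bool` condition above:

```bash
# pass delete_certs=true to also delete the ecloud folder with the certificates
ansible-playbook uninstall.yml -e delete_certs=true
```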

26
docker/dev/Dockerfile vendored
@@ -10,7 +10,7 @@ RUN yarn install --pure-lockfile
COPY ui /app/ui
RUN yarn build
FROM ekidd/rust-musl-builder:1.42.0-openssl11 as rust
FROM ekidd/rust-musl-builder:1.40.0-openssl11 as rust
# Cache deps
WORKDIR /app
@@ -18,32 +18,36 @@ RUN sudo chown -R rust:rust .
RUN USER=root cargo new server
WORKDIR /app/server
COPY server/Cargo.toml server/Cargo.lock ./
COPY server/lemmy_db ./lemmy_db
COPY server/lemmy_utils ./lemmy_utils
RUN sudo chown -R rust:rust .
RUN mkdir -p ./src/bin \
&& echo 'fn main() { println!("Dummy") }' > ./src/bin/main.rs
RUN cargo build
RUN find target/debug -type f -name "$(echo "lemmy_server" | tr '-' '_')*" -exec touch -t 200001010000 {} +
&& echo 'fn main() { println!("Dummy") }' > ./src/bin/main.rs
RUN cargo build --release
RUN rm -f ./target/x86_64-unknown-linux-musl/release/deps/lemmy_server*
COPY server/src ./src/
COPY server/migrations ./migrations/
# Build for debug
RUN cargo build
# Build for release
RUN cargo build --frozen --release
FROM ekidd/rust-musl-builder:1.42.0-openssl11 as docs
# Get diesel-cli on there just in case
# RUN cargo install diesel_cli --no-default-features --features postgres
FROM ekidd/rust-musl-builder:1.40.0-openssl11 as docs
WORKDIR /app
COPY docs ./docs
RUN sudo chown -R rust:rust .
RUN mdbook build docs/
FROM alpine:3.12
FROM alpine:3.10
# Install libpq for postgres
RUN apk add libpq
# Copy resources
COPY server/config/defaults.hjson /config/defaults.hjson
COPY --from=rust /app/server/target/x86_64-unknown-linux-musl/debug/lemmy_server /app/lemmy
COPY --from=rust /app/server/target/x86_64-unknown-linux-musl/release/lemmy_server /app/lemmy
COPY --from=docs /app/docs/book/ /app/dist/documentation/
COPY --from=node /app/ui/dist /app/dist

79
docker/dev/Dockerfile.aarch64 vendored Normal file
@@ -0,0 +1,79 @@
FROM node:10-jessie as node
WORKDIR /app/ui
# Cache deps
COPY ui/package.json ui/yarn.lock ./
RUN yarn install --pure-lockfile
# Build
COPY ui /app/ui
RUN yarn build
# contains qemu-*-static for cross-compilation
FROM multiarch/qemu-user-static as qemu
FROM arm64v8/rust:1.40-buster as rust
COPY --from=qemu /usr/bin/qemu-aarch64-static /usr/bin
#COPY --from=qemu /usr/bin/qemu-arm-static /usr/bin
# Install musl
#RUN apt-get update && apt-get install -y mc
#RUN apt-get install -y musl-tools mc
#libpq-dev mc
#RUN rustup target add ${TARGET}
# Cache deps
WORKDIR /app
RUN USER=root cargo new server
WORKDIR /app/server
COPY server/Cargo.toml server/Cargo.lock ./
RUN mkdir -p ./src/bin \
&& echo 'fn main() { println!("Dummy") }' > ./src/bin/main.rs
RUN cargo build --release
# RUN cargo build
COPY server/src ./src/
COPY server/migrations ./migrations/
RUN rm -f ./target/release/deps/lemmy_server* ; rm -f ./target/debug/deps/lemmy_server*
# build for release
RUN cargo build --frozen --release
# RUN cargo build --frozen
# Get diesel-cli on there just in case
# RUN cargo install diesel_cli --no-default-features --features postgres
# RUN cp /app/server/target/debug/lemmy_server /app/server/ready
RUN cp /app/server/target/release/lemmy_server /app/server/ready
#FROM alpine:3.10
# debian, because the build uses dynamic linking (hence debian:buster)
FROM arm64v8/debian:buster-slim as lemmy
#COPY --from=qemu /usr/bin/qemu-arm-static /usr/bin
COPY --from=qemu /usr/bin/qemu-aarch64-static /usr/bin
# Install libpq for postgres
#RUN apk add libpq
RUN apt-get update && apt-get install -y libpq5
RUN addgroup --gid 1000 lemmy
# for alpine
#RUN adduser -D -s /bin/sh -u 1000 -G lemmy lemmy
# for debian
RUN adduser --disabled-password --shell /bin/sh --uid 1000 --ingroup lemmy lemmy
# Copy resources
COPY server/config/defaults.hjson /config/defaults.hjson
COPY --from=rust /app/server/ready /app/lemmy
COPY --from=node /app/ui/dist /app/dist
RUN chown lemmy:lemmy /app/lemmy
USER lemmy
EXPOSE 8536
CMD ["/app/lemmy"]

79
docker/dev/Dockerfile.armv7hf vendored Normal file
@@ -0,0 +1,79 @@
FROM node:10-jessie as node
WORKDIR /app/ui
# Cache deps
COPY ui/package.json ui/yarn.lock ./
RUN yarn install --pure-lockfile
# Build
COPY ui /app/ui
RUN yarn build
# contains qemu-*-static for cross-compilation
FROM multiarch/qemu-user-static as qemu
FROM arm32v7/rust:1.37-buster as rust
#COPY --from=qemu /usr/bin/qemu-aarch64-static /usr/bin
COPY --from=qemu /usr/bin/qemu-arm-static /usr/bin
# Install musl
#RUN apt-get update && apt-get install -y mc
#RUN apt-get install -y musl-tools mc
#libpq-dev mc
#RUN rustup target add ${TARGET}
# Cache deps
WORKDIR /app
RUN USER=root cargo new server
WORKDIR /app/server
COPY server/Cargo.toml server/Cargo.lock ./
RUN mkdir -p ./src/bin \
&& echo 'fn main() { println!("Dummy") }' > ./src/bin/main.rs
#RUN cargo build --release
# RUN cargo build
RUN RUSTFLAGS='-Ccodegen-units=1' cargo build
COPY server/src ./src/
COPY server/migrations ./migrations/
RUN rm -f ./target/release/deps/lemmy_server* ; rm -f ./target/debug/deps/lemmy_server*
# build for release
#RUN cargo build --frozen --release
RUN cargo build --frozen
# Get diesel-cli on there just in case
# RUN cargo install diesel_cli --no-default-features --features postgres
RUN cp /app/server/target/debug/lemmy_server /app/server/ready
#RUN cp /app/server/target/release/lemmy_server /app/server/ready
#FROM alpine:3.10
# debian, because the build uses dynamic linking (hence debian:buster)
FROM arm32v7/debian:buster-slim as lemmy
COPY --from=qemu /usr/bin/qemu-arm-static /usr/bin
# Install libpq for postgres
#RUN apk add libpq
RUN apt-get update && apt-get install -y libpq5
RUN addgroup --gid 1000 lemmy
# for alpine
#RUN adduser -D -s /bin/sh -u 1000 -G lemmy lemmy
# for debian
RUN adduser --disabled-password --shell /bin/sh --uid 1000 --ingroup lemmy lemmy
# Copy resources
COPY server/config/defaults.hjson /config/defaults.hjson
COPY --from=rust /app/server/ready /app/lemmy
COPY --from=node /app/ui/dist /app/dist
RUN chown lemmy:lemmy /app/lemmy
USER lemmy
EXPOSE 8536
CMD ["/app/lemmy"]

88
docker/dev/Dockerfile.libc vendored Normal file
@@ -0,0 +1,88 @@
# can be built on x64, arm32, and arm64 platforms
# to build on the target platform, run
# docker build -f Dockerfile.libc -t dessalines/lemmy:version ../..
#
# to use docker buildx, run
# docker buildx build --platform linux/amd64,linux/arm64 -f Dockerfile.libc -t YOURNAME/lemmy --push ../..
FROM node:12-buster as node
# use this if using docker buildx
#FROM --platform=$BUILDPLATFORM node:12-buster as node
WORKDIR /app/ui
# Cache deps
COPY ui/package.json ui/yarn.lock ./
RUN yarn install --pure-lockfile --network-timeout 100000
# Build
COPY ui /app/ui
RUN yarn build
FROM rust:1.40 as rust
# Cache deps
WORKDIR /app
RUN USER=root cargo new server
WORKDIR /app/server
COPY server/Cargo.toml server/Cargo.lock ./
RUN mkdir -p ./src/bin \
&& echo 'fn main() { println!("Dummy") }' > ./src/bin/main.rs
RUN cargo build --release
#RUN cargo build && \
# rm -f ./target/release/deps/lemmy_server* ; rm -f ./target/debug/deps/lemmy_server*
COPY server/src ./src/
COPY server/migrations ./migrations/
# build for release
# workaround for https://github.com/rust-lang/rust/issues/62896
#RUN RUSTFLAGS='-Ccodegen-units=1' cargo build --release
RUN cargo build --release --frozen
#RUN cargo build --frozen
# Get diesel-cli on there just in case
# RUN cargo install diesel_cli --no-default-features --features postgres
# keep the result at the same path for the lemmy container stage
RUN cp /app/server/target/release/lemmy_server /app/server/ready
#RUN cp /app/server/target/debug/lemmy_server /app/server/ready
FROM rust:1.40 as docs
WORKDIR /app
# Build docs
COPY docs ./docs
RUN cargo install mdbook
RUN mdbook build docs/
#FROM alpine:3.10
# debian, because the build uses dynamic linking (hence debian:buster)
FROM debian:buster as lemmy
# Install libpq for postgres
#RUN apk add libpq
RUN apt-get update && apt-get install -y libpq5
RUN addgroup --gid 1000 lemmy
# for alpine
#RUN adduser -D -s /bin/sh -u 1000 -G lemmy lemmy
# for debian
RUN adduser --disabled-password --shell /bin/sh --uid 1000 --ingroup lemmy lemmy
# Copy resources
COPY server/config/defaults.hjson /config/defaults.hjson
COPY --from=node /app/ui/dist /app/dist
COPY --from=docs /app/docs/book/ /app/dist/documentation/
COPY --from=rust /app/server/ready /app/lemmy
RUN chown lemmy:lemmy /app/lemmy
USER lemmy
EXPOSE 8536
CMD ["/app/lemmy"]

73
docker/dev/deploy.sh vendored Executable file
@@ -0,0 +1,73 @@
#!/bin/sh
git checkout master
# Creating the new tag
new_tag="$1"
git tag $new_tag
third_semver=$(echo $new_tag | cut -d "." -f 3)
# Setting the version on the front end
cd ../../
echo "export const version: string = '$(git describe --tags)';" > "ui/src/version.ts"
git add "ui/src/version.ts"
# Setting the version on the backend
echo "pub const VERSION: &str = \"$(git describe --tags)\";" > "server/src/version.rs"
git add "server/src/version.rs"
# Setting the version for Ansible
git describe --tags > "ansible/VERSION"
git add "ansible/VERSION"
cd docker/dev
# Changing the docker-compose prod
sed -i "s/dessalines\/lemmy:.*/dessalines\/lemmy:$new_tag/" ../prod/docker-compose.yml
sed -i "s/dessalines\/lemmy:.*/dessalines\/lemmy:$new_tag/" ../../ansible/templates/docker-compose.yml
git add ../prod/docker-compose.yml
git add ../../ansible/templates/docker-compose.yml
# The commit
git commit -m"Version $new_tag"
# Rebuilding docker
docker-compose build
docker tag dev_lemmy:latest dessalines/lemmy:x64-$new_tag
docker push dessalines/lemmy:x64-$new_tag
# Build for Raspberry Pi / other archs
# Arm currently not working
# docker build -t lemmy:armv7hf -f Dockerfile.armv7hf ../../
# docker tag lemmy:armv7hf dessalines/lemmy:armv7hf-$new_tag
# docker push dessalines/lemmy:armv7hf-$new_tag
# aarch64
# Only do this on major releases (i.e. the third semver component is 0)
if [ $third_semver -eq 0 ]; then
# Registering qemu binaries
docker run --rm --privileged multiarch/qemu-user-static:register --reset
docker build -t lemmy:aarch64 -f Dockerfile.aarch64 ../../
docker tag lemmy:aarch64 dessalines/lemmy:arm64-$new_tag
docker push dessalines/lemmy:arm64-$new_tag
fi
# Creating the manifest for the multi-arch build
if [ $third_semver -eq 0 ]; then
docker manifest create dessalines/lemmy:$new_tag \
dessalines/lemmy:x64-$new_tag \
dessalines/lemmy:arm64-$new_tag
else
docker manifest create dessalines/lemmy:$new_tag \
dessalines/lemmy:x64-$new_tag
fi
docker manifest push dessalines/lemmy:$new_tag
# Push
git push origin $new_tag
git push
# Pushing to any ansible deploys
cd ../../ansible
ansible-playbook lemmy.yml --become
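The script takes the new tag as its only argument; a hypothetical invocation from `docker/dev` (the tag is made up):

```bash
# the third semver component is 0, so the multi-arch path also runs
./deploy.sh v0.7.0
```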

12
docker/dev/dev_deploy.sh vendored Executable file
@@ -0,0 +1,12 @@
#!/bin/sh
# Building from the dev branch for dev servers
git checkout dev
# Rebuilding dev docker
docker-compose build
docker tag dev_lemmy:latest dessalines/lemmy:dev
docker push dessalines/lemmy:dev
# SSH and pull it
ssh $LEMMY_USER@$LEMMY_HOST "cd ~/git/lemmy/docker/dev && docker pull dessalines/lemmy:dev && docker-compose up -d"
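`dev_deploy.sh` reads its target from the `LEMMY_USER` and `LEMMY_HOST` environment variables; a sketch with placeholder values:

```bash
# placeholders; point these at your own dev server
export LEMMY_USER=myuser
export LEMMY_HOST=example.com
./dev_deploy.sh
```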

@@ -1,7 +1,15 @@
version: '3.3'
services:
lemmy_db:
image: postgres:12-alpine
environment:
- POSTGRES_USER=lemmy
- POSTGRES_PASSWORD=password
- POSTGRES_DB=lemmy
volumes:
- lemmy_db:/var/lib/postgresql/data
restart: always
lemmy:
build:
context: ../../
@@ -9,40 +17,25 @@ services:
ports:
- "127.0.0.1:8536:8536"
restart: always
environment:
- RUST_LOG=debug
volumes:
- ../lemmy.hjson:/config/config.hjson
- ../lemmy.hjson:/config/config.hjson:ro
depends_on:
- pictrs
- postgres
- iframely
postgres:
image: postgres:12-alpine
- lemmy_db
lemmy_pictshare:
image: shtripok/pictshare:latest
ports:
- "127.0.0.1:5432:5432"
environment:
- POSTGRES_USER=lemmy
- POSTGRES_PASSWORD=password
- POSTGRES_DB=lemmy
- "127.0.0.1:8537:80"
volumes:
- ./volumes/postgres:/var/lib/postgresql/data
- lemmy_pictshare:/usr/share/nginx/html/data
restart: always
pictrs:
image: asonix/pictrs:v0.1.13-r0
ports:
- "127.0.0.1:8537:8080"
user: 991:991
volumes:
- ./volumes/pictrs:/mnt
restart: always
iframely:
lemmy_iframely:
image: dogbin/iframely:latest
ports:
- "127.0.0.1:8061:80"
- "127.0.0.1:8061:8061"
volumes:
- ../iframely.config.local.js:/iframely/config.local.js:ro
restart: always
volumes:
lemmy_db:
lemmy_pictshare:
lemmy_iframely:

@@ -1,6 +1,2 @@
#!/bin/sh
set -e
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_BUILDKIT=1
docker-compose up -d --no-deps --build

@@ -1,18 +0,0 @@
#!/bin/bash
set -e
BRANCH=$1
git checkout $BRANCH
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_BUILDKIT=1
# Rebuilding dev docker
sudo docker build ../../ -f . -t "dessalines/lemmy:$BRANCH"
sudo docker push "dessalines/lemmy:$BRANCH"
# Run the playbook
pushd ../../../lemmy-ansible
ansible-playbook -i test playbooks/site.yml
popd

@@ -1,36 +0,0 @@
#!/bin/bash
set -e
# make sure there are no old containers or old data around
sudo docker-compose --file ../federation/docker-compose.yml --project-directory . down
sudo rm -rf volumes
pushd ../../server/
cargo build
popd
pushd ../../ui
yarn
popd
mkdir -p volumes/pictrs_{alpha,beta,gamma}
sudo chown -R 991:991 volumes/pictrs_{alpha,beta,gamma}
sudo docker build ../../ --file ../federation/Dockerfile --tag lemmy-federation:latest
sudo mkdir -p volumes/pictrs_alpha
sudo chown -R 991:991 volumes/pictrs_alpha
sudo docker-compose --file ../federation/docker-compose.yml --project-directory . up -d
pushd ../../ui
echo "Waiting for Lemmy to start..."
while [[ "$(curl -s -o /dev/null -w '%{http_code}' 'localhost:8540/api/v1/site')" != "200" ]]; do sleep 1; done
while [[ "$(curl -s -o /dev/null -w '%{http_code}' 'localhost:8550/api/v1/site')" != "200" ]]; do sleep 1; done
while [[ "$(curl -s -o /dev/null -w '%{http_code}' 'localhost:8560/api/v1/site')" != "200" ]]; do sleep 1; done
yarn api-test || true
popd
sudo docker-compose --file ../federation/docker-compose.yml --project-directory . down
sudo rm -r volumes/

@@ -1,19 +0,0 @@
#!/bin/bash
set -e
sudo rm -rf volumes
pushd ../../server/
cargo build
popd
pushd ../../ui
yarn
popd
mkdir -p volumes/pictrs_{alpha,beta,gamma}
sudo chown -R 991:991 volumes/pictrs_{alpha,beta,gamma}
sudo docker build ../../ --file ../federation/Dockerfile --tag lemmy-federation:latest
sudo docker-compose --file ../federation/docker-compose.yml --project-directory . up

@@ -1,10 +0,0 @@
#!/bin/bash
set -xe
pushd ../../ui
echo "Waiting for Lemmy to start..."
while [[ "$(curl -s -o /dev/null -w '%{http_code}' 'localhost:8540/api/v1/site')" != "200" ]]; do sleep 1; done
while [[ "$(curl -s -o /dev/null -w '%{http_code}' 'localhost:8550/api/v1/site')" != "200" ]]; do sleep 1; done
while [[ "$(curl -s -o /dev/null -w '%{http_code}' 'localhost:8560/api/v1/site')" != "200" ]]; do sleep 1; done
yarn api-test || true
popd

@@ -1,17 +0,0 @@
FROM ekidd/rust-musl-builder:1.42.0-openssl11
USER root
RUN mkdir /app/dist/documentation/ -p \
&& addgroup --gid 1001 lemmy \
&& adduser --gecos "" --disabled-password --shell /bin/sh -u 1001 --ingroup lemmy lemmy
# Copy resources
COPY server/config/defaults.hjson /app/config/defaults.hjson
COPY ui/dist /app/dist
COPY server/target/debug/lemmy_server /app/lemmy
RUN chown lemmy:lemmy /app/ -R
USER lemmy
EXPOSE 8536
WORKDIR /app
CMD ["/app/lemmy"]

@@ -1,113 +0,0 @@
version: '3.3'
services:
nginx:
image: nginx:1.17-alpine
ports:
- "8540:8540"
- "8550:8550"
- "8560:8560"
volumes:
# Hack to make this work from both docker/federation/ and docker/federation-test/
- ../federation/nginx.conf:/etc/nginx/nginx.conf
restart: on-failure
depends_on:
- lemmy-alpha
- pictrs
- lemmy-beta
- lemmy-gamma
- iframely
pictrs:
restart: always
image: asonix/pictrs:v0.1.13-r0
user: 991:991
volumes:
- ./volumes/pictrs_alpha:/mnt
lemmy-alpha:
image: lemmy-federation:latest
environment:
- LEMMY_HOSTNAME=lemmy-alpha:8540
- LEMMY_DATABASE_URL=postgres://lemmy:password@postgres_alpha:5432/lemmy
- LEMMY_JWT_SECRET=changeme
- LEMMY_FRONT_END_DIR=/app/dist
- LEMMY_FEDERATION__ENABLED=true
- LEMMY_FEDERATION__TLS_ENABLED=false
- LEMMY_FEDERATION__ALLOWED_INSTANCES=lemmy-beta,lemmy-gamma
- LEMMY_PORT=8540
- LEMMY_SETUP__ADMIN_USERNAME=lemmy_alpha
- LEMMY_SETUP__ADMIN_PASSWORD=lemmy
- LEMMY_SETUP__SITE_NAME=lemmy-alpha
- RUST_BACKTRACE=1
- RUST_LOG=debug
depends_on:
- postgres_alpha
postgres_alpha:
image: postgres:12-alpine
environment:
- POSTGRES_USER=lemmy
- POSTGRES_PASSWORD=password
- POSTGRES_DB=lemmy
volumes:
- ./volumes/postgres_alpha:/var/lib/postgresql/data
lemmy-beta:
image: lemmy-federation:latest
environment:
- LEMMY_HOSTNAME=lemmy-beta:8550
- LEMMY_DATABASE_URL=postgres://lemmy:password@postgres_beta:5432/lemmy
- LEMMY_JWT_SECRET=changeme
- LEMMY_FRONT_END_DIR=/app/dist
- LEMMY_FEDERATION__ENABLED=true
- LEMMY_FEDERATION__TLS_ENABLED=false
- LEMMY_FEDERATION__ALLOWED_INSTANCES=lemmy-alpha,lemmy-gamma
- LEMMY_PORT=8550
- LEMMY_SETUP__ADMIN_USERNAME=lemmy_beta
- LEMMY_SETUP__ADMIN_PASSWORD=lemmy
- LEMMY_SETUP__SITE_NAME=lemmy-beta
- RUST_BACKTRACE=1
- RUST_LOG=debug
depends_on:
- postgres_beta
postgres_beta:
image: postgres:12-alpine
environment:
- POSTGRES_USER=lemmy
- POSTGRES_PASSWORD=password
- POSTGRES_DB=lemmy
volumes:
- ./volumes/postgres_beta:/var/lib/postgresql/data
lemmy-gamma:
image: lemmy-federation:latest
environment:
- LEMMY_HOSTNAME=lemmy-gamma:8560
- LEMMY_DATABASE_URL=postgres://lemmy:password@postgres_gamma:5432/lemmy
- LEMMY_JWT_SECRET=changeme
- LEMMY_FRONT_END_DIR=/app/dist
- LEMMY_FEDERATION__ENABLED=true
- LEMMY_FEDERATION__TLS_ENABLED=false
- LEMMY_FEDERATION__ALLOWED_INSTANCES=lemmy-alpha,lemmy-beta
- LEMMY_PORT=8560
- LEMMY_SETUP__ADMIN_USERNAME=lemmy_gamma
- LEMMY_SETUP__ADMIN_PASSWORD=lemmy
- LEMMY_SETUP__SITE_NAME=lemmy-gamma
- RUST_BACKTRACE=1
- RUST_LOG=debug
depends_on:
- postgres_gamma
postgres_gamma:
image: postgres:12-alpine
environment:
- POSTGRES_USER=lemmy
- POSTGRES_PASSWORD=password
- POSTGRES_DB=lemmy
volumes:
- ./volumes/postgres_gamma:/var/lib/postgresql/data
iframely:
image: dogbin/iframely:latest
volumes:
- ../iframely.config.local.js:/iframely/config.local.js:ro
restart: always

@@ -1,125 +0,0 @@
events {
worker_connections 1024;
}
http {
server {
listen 8540;
server_name 127.0.0.1;
access_log off;
# Upload limit for pictshare
client_max_body_size 50M;
location / {
proxy_pass http://lemmy-alpha:8540;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# pict-rs images
location /pictrs {
location /pictrs/image {
proxy_pass http://pictrs:8080/image;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Block the import
return 403;
}
location /iframely/ {
proxy_pass http://iframely:80/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 8550;
server_name 127.0.0.1;
access_log off;
# Upload limit for pictshare
client_max_body_size 50M;
location / {
proxy_pass http://lemmy-beta:8550;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# pict-rs images
location /pictrs {
location /pictrs/image {
proxy_pass http://pictrs:8080/image;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Block the import
return 403;
}
location /iframely/ {
proxy_pass http://iframely:80/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 8560;
server_name 127.0.0.1;
access_log off;
# Upload limit for pictshare
client_max_body_size 50M;
location / {
proxy_pass http://lemmy-gamma:8560;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# pict-rs images
location /pictrs {
location /pictrs/image {
proxy_pass http://pictrs:8080/image;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Block the import
return 403;
}
location /iframely/ {
proxy_pass http://iframely:80/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
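Once the three instances are up, each one is reachable through this proxy on its own port (ports taken from the server blocks above):

```bash
curl -s http://127.0.0.1:8540/  # lemmy-alpha
curl -s http://127.0.0.1:8550/  # lemmy-beta
curl -s http://127.0.0.1:8560/  # lemmy-gamma
```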

View File

@ -1,28 +0,0 @@
#!/bin/bash
set -e
# already start rust build in the background
pushd ../../server/ || exit
cargo build &
popd || exit
if [ "$1" = "-yarn" ]; then
pushd ../../ui/ || exit
yarn
yarn build
popd || exit
fi
# wait for rust build to finish
pushd ../../server/ || exit
cargo build
popd || exit
sudo docker build ../../ --file Dockerfile -t lemmy-federation:latest
for Item in alpha beta gamma ; do
sudo mkdir -p volumes/pictrs_$Item
sudo chown -R 991:991 volumes/pictrs_$Item
done
sudo docker-compose up
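A typical invocation, from the directory containing the script (the `-yarn` flag also rebuilds the UI first):

```bash
cd docker/federation
./run-federation-test.bash -yarn
```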

View File

@ -37,7 +37,7 @@
},
*/
port: 80, //can be overridden by PORT env var
port: 8061, //can be overridden by PORT env var
host: '0.0.0.0', // Dockers beware. See https://github.com/itteco/iframely/issues/132#issuecomment-242991246
//can be overridden by HOST env var

52
docker/lemmy.hjson vendored
View File

@ -1,7 +1,18 @@
{
# for more info about the config, check out the documentation
# https://dev.lemmy.ml/docs/administration_configuration.html
database: {
# username to connect to postgres
user: "lemmy"
# password to connect to postgres
password: "password"
# host where postgres is running
host: "lemmy_db"
# port where postgres can be accessed
port: 5432
# name of the postgres database for lemmy
database: "lemmy"
# maximum number of active sql connections
pool_size: 5
}
# the domain name of your instance (eg "dev.lemmy.ml")
hostname: "my_domain"
# address where lemmy should listen for incoming requests
@ -10,20 +21,27 @@
port: 8536
# json web token for authorization between server and client
jwt_secret: "changeme"
# settings related to the postgresql database
database: {
# name of the postgres database for lemmy
database: "lemmy"
# username to connect to postgres
user: "lemmy"
# password to connect to postgres
password: "password"
# host where postgres is running
host: "postgres"
}
# The location of the frontend
# The dir for the front end
front_end_dir: "/app/dist"
# # optional: email sending configuration
# whether to enable activitypub federation. this feature is in alpha, do not enable in production, as it might
# cause problems like remote instances fetching and permanently storing bad data.
federation_enabled: false
# rate limits for various user actions, by user ip
rate_limit: {
# maximum number of messages created in interval
message: 180
# interval length for message limit
message_per_second: 60
# maximum number of posts created in interval
post: 6
# interval length for post limit
post_per_second: 600
# maximum number of registrations in interval
register: 3
# interval length for registration limit
register_per_second: 3600
}
# # email sending configuration
# email: {
# # hostname of the smtp server
# smtp_server: ""
@ -31,7 +49,7 @@
# smtp_login: ""
# # password to login to the smtp server
# smtp_password: ""
# # address to send emails from, eg "noreply@your-instance.com"
# # address to send emails from, eg "info@your-instance.com"
# smtp_from_address: ""
# }
}

View File

@ -1,65 +0,0 @@
ARG RUST_BUILDER_IMAGE=shtripok/rust-musl-builder:arm
FROM $RUST_BUILDER_IMAGE as rust
#ARG RUSTRELEASEDIR="debug"
ARG RUSTRELEASEDIR="release"
# Cache deps
WORKDIR /app
RUN sudo chown -R rust:rust .
RUN USER=root cargo new server
WORKDIR /app/server
COPY server/Cargo.toml server/Cargo.lock ./
COPY server/lemmy_db ./lemmy_db
COPY server/lemmy_utils ./lemmy_utils
RUN mkdir -p ./src/bin \
&& echo 'fn main() { println!("Dummy") }' > ./src/bin/main.rs
RUN cargo build --release
RUN find target/$CARGO_BUILD_TARGET/$RUSTRELEASEDIR -type f -name "$(echo "lemmy_server" | tr '-' '_')*" -exec touch -t 200001010000 {} +
COPY server/src ./src/
COPY server/migrations ./migrations/
# build for release
# workaround for https://github.com/rust-lang/rust/issues/62896
RUN cargo build --frozen --release
# reduce binary size
RUN strip ./target/$CARGO_BUILD_TARGET/$RUSTRELEASEDIR/lemmy_server
RUN cp ./target/$CARGO_BUILD_TARGET/$RUSTRELEASEDIR/lemmy_server /app/server/
FROM $RUST_BUILDER_IMAGE as docs
WORKDIR /app
COPY --chown=rust:rust docs ./docs
RUN mdbook build docs/
FROM node:12-buster as node
WORKDIR /app/ui
# Cache deps
COPY ui/package.json ui/yarn.lock ./
RUN yarn install --pure-lockfile --network-timeout 600000
# Build
COPY ui /app/ui
RUN yarn build
FROM alpine:3.12 as lemmy
# Install libpq for postgres
RUN apk add libpq
RUN addgroup -g 1000 lemmy
RUN adduser -D -s /bin/sh -u 1000 -G lemmy lemmy
# Copy resources
COPY --chown=lemmy:lemmy server/config/defaults.hjson /config/defaults.hjson
COPY --chown=lemmy:lemmy --from=rust /app/server/lemmy_server /app/lemmy
COPY --chown=lemmy:lemmy --from=docs /app/docs/book/ /app/dist/documentation/
COPY --chown=lemmy:lemmy --from=node /app/ui/dist /app/dist
RUN chown lemmy:lemmy /app/lemmy
USER lemmy
EXPOSE 8536
CMD ["/app/lemmy"]

60
docker/prod/deploy.sh vendored
View File

@ -1,60 +0,0 @@
#!/bin/sh
set -e
git checkout master
# Import translations
git fetch weblate
git merge weblate/master
# Creating the new tag
new_tag="$1"
third_semver=$(echo $new_tag | cut -d "." -f 3)
# Setting the version on the front end
cd ../../
echo "export const version: string = '$new_tag';" > "ui/src/version.ts"
git add "ui/src/version.ts"
# Setting the version on the backend
echo "pub const VERSION: &str = \"$new_tag\";" > "server/src/version.rs"
git add "server/src/version.rs"
# Setting the version for Ansible
echo $new_tag > "ansible/VERSION"
git add "ansible/VERSION"
cd docker/prod || exit
# Changing the docker-compose prod
sed -i "s/dessalines\/lemmy:.*/dessalines\/lemmy:$new_tag/" ../prod/docker-compose.yml
sed -i "s/dessalines\/lemmy:.*/dessalines\/lemmy:$new_tag/" ../../ansible/templates/docker-compose.yml
git add ../prod/docker-compose.yml
git add ../../ansible/templates/docker-compose.yml
# The commit
git commit -m"Version $new_tag"
git tag $new_tag
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_BUILDKIT=1
# Rebuilding docker
if [ $third_semver -eq 0 ]; then
# TODO get linux/arm/v7 build working
# Build for Raspberry Pi / other archs too
docker buildx build --platform linux/amd64,linux/arm64 ../../ \
--file Dockerfile \
--tag dessalines/lemmy:$new_tag \
--push
else
docker buildx build --platform linux/amd64 ../../ \
--file Dockerfile \
--tag dessalines/lemmy:$new_tag \
--push
fi
# Push
git push origin $new_tag
git push
# Pushing to any ansible deploys
cd ../../../lemmy-ansible || exit
ansible-playbook -i prod playbooks/site.yml --vault-password-file vault_pass
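Invocation sketch (the version tag is only an example; the script assumes a `weblate` git remote and docker buildx are already configured):

```bash
cd docker/prod
./deploy.sh v0.7.22
```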

View File

@ -1,44 +1,39 @@
version: '2.2'
version: '3.3'
services:
postgres:
lemmy_db:
image: postgres:12-alpine
environment:
- POSTGRES_USER=lemmy
- POSTGRES_PASSWORD=password
- POSTGRES_DB=lemmy
volumes:
- ./volumes/postgres:/var/lib/postgresql/data
- lemmy_db:/var/lib/postgresql/data
restart: always
lemmy:
image: dessalines/lemmy:v0.7.21
image: dessalines/lemmy:v0.6.22
ports:
- "127.0.0.1:8536:8536"
restart: always
environment:
- RUST_LOG=error
volumes:
- ./lemmy.hjson:/config/config.hjson
- ./lemmy.hjson:/config/config.hjson:ro
depends_on:
- postgres
- pictrs
- iframely
pictrs:
image: asonix/pictrs:v0.1.13-r0
ports:
- "127.0.0.1:8537:8080"
user: 991:991
- lemmy_db
lemmy_pictshare:
image: shtripok/pictshare:latest
ports:
- "127.0.0.1:8537:80"
volumes:
- ./volumes/pictrs:/mnt
- lemmy_pictshare:/usr/share/nginx/html/data
restart: always
iframely:
lemmy_iframely:
image: dogbin/iframely:latest
ports:
- "127.0.0.1:8061:80"
- "127.0.0.1:8061:8061"
volumes:
- ./iframely.config.local.js:/iframely/config.local.js:ro
restart: always
mem_limit: 100m
volumes:
lemmy_db:
lemmy_pictshare:
lemmy_iframely:

View File

@ -1,92 +0,0 @@
#!/bin/bash
set -e
if [[ $(id -u) != 0 ]]; then
echo "This migration needs to be run as root"
exit
fi
if [[ ! -f docker-compose.yml ]]; then
echo "No docker-compose.yml found in current directory. Is this the right folder?"
exit
fi
if ! which jq > /dev/null; then
echo "jq must be installed to run this migration. On ubuntu systems, try 'sudo apt-get install jq'"
exit
fi
# Fixing pictrs permissions
mkdir -p volumes/pictrs
sudo chown -R 991:991 volumes/pictrs
echo "Restarting docker-compose, making sure that pictrs is started and pictshare is removed"
docker-compose up -d --remove-orphans
if [[ -z $(docker-compose ps | grep pictrs) ]]; then
echo "Pict-rs is not running, make sure you update Lemmy first"
exit
fi
# echo "Stopping Lemmy so that users dont upload new images during the migration"
# docker-compose stop lemmy
CRASHED_ON=()
pushd volumes/pictshare/
echo "Importing pictshare images to pict-rs..."
IMAGE_NAMES=*
for image in $IMAGE_NAMES; do
IMAGE_PATH="$(pwd)/$image/$image"
if [[ ! -f $IMAGE_PATH ]]; then
continue
fi
res=$(curl -s -F "images[]=@$IMAGE_PATH" http://127.0.0.1:8537/import | jq .msg)
if [ "${res}" == "" ]; then
echo -n "C" >&2
echo ""
CRASHED_ON+=("${IMAGE_PATH}")
echo "Failed to import $IMAGE_PATH with no error message"
echo " assuming crash, sleeping"
sleep 10
continue
fi
if [ "${res}" != "\"ok\"" ]; then
echo -n "F" >&2
echo ""
echo "Failed to import $IMAGE_PATH"
echo " Reason: ${res}"
else
echo -n "." >&2
fi
done
for image in ${CRASHED_ON[@]}; do
echo "Retrying ${image}"
res=$(curl -s -F "images[]=@$IMAGE_PATH" http://127.0.0.1:8537/import | jq .msg)
if [ "${res}" != "\"ok\"" ]; then
echo -n "F" >&2
echo ""
echo "Failed to upload ${image} on 2nd attempt"
echo " Reason: ${res}"
else
echo -n "." >&2
fi
done
echo "Fixing permissions on pictshare folder"
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
popd
echo "Rewrite image links in Lemmy database"
docker-compose exec -u postgres postgres psql -U lemmy -c "UPDATE user_ SET avatar = REPLACE(avatar, 'pictshare', 'pictrs/image') WHERE avatar is not null;"
docker-compose exec -u postgres postgres psql -U lemmy -c "UPDATE post SET url = REPLACE(url, 'pictshare', 'pictrs/image') WHERE url is not null;"
echo "Moving pictshare data folder to pictshare_backup"
mv volumes/pictshare volumes/pictshare_backup
echo "Migration done, starting Lemmy again"
echo "If everything went well, you can delete ./volumes/pictshare_backup/"
docker-compose start lemmy

Binary file not shown. (Before: 78 KiB)

Binary file not shown. (Before: 92 KiB)

Binary file not shown. (Before: 54 KiB)

7
docs/src/SUMMARY.md vendored
View File

@ -4,19 +4,14 @@
- [Features](about_features.md)
- [Goals](about_goals.md)
- [Post and Comment Ranking](about_ranking.md)
- [Guide](about_guide.md)
- [Markdown Guide](about_markdown_guide.md)
Review

This is a URL change that I have to fix in some other places, but I can do that.

Review

Actually now that I think of it, this should stay as `about_guide.md`. I want this to be the general guide page, not just for markdown.

Review

Sorry, missed your comment. What else do you want to put on that page that is not about markdown? I think other things should really get their own page.

Review

Not sure yet, but it could be lots of things in the future, or general FAQ stuff. Ideally lemmy shouldn't need instructions, but I realize some things like hotkeys and such are needed.

Review

If we need a FAQ we will add it, but this seems a bit too long for that. And we need to document it somewhere, even if we manage that most people don't need the documentation.

Review

The main thing is that I want to keep the links in the front end to the general guide, not just the markdown one. So we could split the markdown specific stuff into a different sub-category, but I want the `about_guide.md` url to stay the same.

Review

Alright, I will let you handle this then.
- [Administration](administration.md)
- [Install with Docker](administration_install_docker.md)
- [Install with Ansible](administration_install_ansible.md)
- [Install with Kubernetes](administration_install_kubernetes.md)
- [Configuration](administration_configuration.md)
- [Backup and Restore](administration_backup_and_restore.md)
- [Contributing](contributing.md)
- [Docker Development](contributing_docker_development.md)
- [Local Development](contributing_local_development.md)
- [Tests](contributing_tests.md)
- [Federation Development](contributing_federation_development.md)
- [Websocket/HTTP API](contributing_websocket_http_api.md)
- [ActivityPub API Outline](contributing_apub_api_outline.md)
- [Theming Guide](contributing_theming.md)
- [Lemmy Council](lemmy_council.md)

4
docs/src/about.md vendored
View File

@ -2,9 +2,9 @@
Front Page|Post
---|---
![main screen](https://raw.githubusercontent.com/LemmyNet/lemmy/master/docs/img/main_screen.png)|![chat screen](https://raw.githubusercontent.com/LemmyNet/lemmy/master/docs/img/chat_screen.png)
![main screen](https://i.imgur.com/kZSRcRu.png)|![chat screen](https://i.imgur.com/4XghNh6.png)
[Lemmy](https://github.com/LemmyNet/lemmy) is similar to sites like [Reddit](https://reddit.com), [Lobste.rs](https://lobste.rs), [Raddle](https://raddle.me), or [Hacker News](https://news.ycombinator.com/): you subscribe to forums you're interested in, post links and discussions, then vote, and comment on them. Behind the scenes, it is very different; anyone can easily run a server, and all these servers are federated (think email), and connected to the same universe, called the [Fediverse](https://en.wikipedia.org/wiki/Fediverse).
[Lemmy](https://github.com/dessalines/lemmy) is similar to sites like [Reddit](https://reddit.com), [Lobste.rs](https://lobste.rs), [Raddle](https://raddle.me), or [Hacker News](https://news.ycombinator.com/): you subscribe to forums you're interested in, post links and discussions, then vote, and comment on them. Behind the scenes, it is very different; anyone can easily run a server, and all these servers are federated (think email), and connected to the same universe, called the [Fediverse](https://en.wikipedia.org/wiki/Fediverse).
For a link aggregator, this means a user registered on one server can subscribe to forums on any other server, and can have discussions with users registered elsewhere.

View File

@ -1,5 +1,4 @@
# Goals
- Come up with a name / codename.
- Must have communities.
- Must have threaded comments.
@ -8,7 +7,6 @@
- Use websockets for post / gets to your own instance.
# Questions
- How does voting work? Should we go back to the old way of showing up and downvote counts? Or just a score?
- Decide on tech to be used
- Backend: Actix, Diesel.
@ -19,7 +17,10 @@
- On mobile, allow you to switch between them. Default?
# Resources / Potential Libraries
- Use the [activitypub crate.](https://docs.rs/activitypub/0.1.4/activitypub/)
- https://docs.rs/activitypub/0.1.4/activitypub/
- [Activitypub vocab.](https://www.w3.org/TR/activitystreams-vocabulary/)
- [Activitypub main](https://www.w3.org/TR/activitypub/)
- [Diesel to Postgres data types](https://kotiri.com/2018/01/31/postgresql-diesel-rust-types.html)
- [helpful diesel examples](http://siciarz.net/24-days-rust-diesel/)
- [Recursive query for adjacency list for nested comments](https://stackoverflow.com/questions/192220/what-is-the-most-efficient-elegant-way-to-parse-a-flat-table-into-a-tree/192462#192462)
@ -35,20 +36,9 @@
- [Temp Icon](https://www.flaticon.com/free-icon/mouse_194242)
- [Rust docker build](https://shaneutt.com/blog/rust-fast-small-docker-image-builds/)
- [Zurb mentions](https://github.com/zurb/tribute)
- [TippyJS](https://github.com/atomiks/tippyjs)
- Activitypub guides
- https://blog.joinmastodon.org/2018/06/how-to-implement-a-basic-activitypub-server/
- https://raw.githubusercontent.com/w3c/activitypub/gh-pages/activitypub-tutorial.txt
- https://github.com/tOkeshu/activitypub-example
- https://blog.joinmastodon.org/2018/07/how-to-make-friends-and-verify-requests/
## Activitypub guides
- https://blog.joinmastodon.org/2018/06/how-to-implement-a-basic-activitypub-server/
- https://raw.githubusercontent.com/w3c/activitypub/gh-pages/activitypub-tutorial.txt
- https://github.com/tOkeshu/activitypub-example
- https://blog.joinmastodon.org/2018/07/how-to-make-friends-and-verify-requests/
- Use the [activitypub crate.](https://docs.rs/activitypub/0.1.4/activitypub/)
- https://docs.rs/activitypub/0.1.4/activitypub/
- [Activitypub vocab.](https://www.w3.org/TR/activitystreams-vocabulary/)
- [Activitypub main](https://www.w3.org/TR/activitypub/)
- [Federation.md](https://github.com/dariusk/gathio/blob/7fc93dbe9d4d99457a0e85c6c532112f415b7af2/FEDERATION.md)
- [Activitypub implementers guide](https://socialhub.activitypub.rocks/t/draft-guide-for-new-activitypub-implementers/479)
- [Data storage questions](https://socialhub.activitypub.rocks/t/data-storage-questions/579/3)
- [Activitypub as it has been understood](https://flak.tedunangst.com/post/ActivityPub-as-it-has-been-understood)
- [Asonix http signatures in rust](https://git.asonix.dog/Aardwolf/http-signature-normalization)

View File

@ -1,24 +1,14 @@
# Lemmy Guide
# Markdown Guide
## Lemmy Specific
Start typing...
- `@a_user_name` to get a list of usernames.
- `!a_community` to get a list of communities.
- `#a_community` to get a list of communities.
- `:emoji` to get a list of emojis.
## Sorting
*Applies to both posts and comments*
Type | Description
--- | ---
Hot | Shows *trending* posts, based on the score, and the most recent comment time.
New | Newest posts.
Top | Shows the highest scoring posts in the given time frame.
For more detail, check the [Post and Comment Ranking details](about_ranking.md).
## Markdown Guide
## General Markdown
Type | Or | … to Get
--- | --- | ---
@ -37,4 +27,3 @@ Horizontal Rule <br>\--- | Horizontal Rule<br>\*\*\* | Horizontal Rule <br><hr>
::: spoiler hidden or nsfw stuff<br>*a bunch of spoilers here*<br>::: | | <details><summary> hidden or nsfw stuff </summary><p><em>a bunch of spoilers here</em></p></details>
[CommonMark Tutorial](https://commonmark.org/help/tutorial/)

View File

@ -18,7 +18,7 @@ Score = Upvotes - Downvotes
Time = time since submission (in hours)
Gravity = Decay gravity, 1.8 is default
```
- For posts, in order to bring up active posts, it uses the latest comment time (limited to a max creation age of a month ago)
- Use Max(1, score) to make sure all comments are affected by time decay.
- Add 3 to the score, so that everything that has less than 3 downvotes will seem new. Otherwise all new comments would stay at zero, near the bottom.
- The sign and abs of the score are necessary for dealing with the log of negative scores.
@ -26,4 +26,4 @@ Gravity = Decay gravity, 1.8 is default
A plot of rank over 24 hours, of scores of 1, 5, 10, 100, 1000, with a scale factor of 10k.
![](https://raw.githubusercontent.com/LemmyNet/lemmy/master/docs/img/rank_algorithm.png)
![](https://i.imgur.com/w8oBLlL.png)
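Assuming the pieces above combine as `Rank = ScaleFactor * log(Max(1, 3 + Score)) / (Time + 2)^Gravity` (the canonical formula lives in the full ranking doc), a single point on that plot can be sanity-checked from a shell:

```bash
# Rank of a post with score 10, one hour old; scale factor 10k, gravity 1.8.
# In bc -l, l() is the natural log and e(y * l(x)) computes x^y.
# The Max(1, _) floor is omitted here since 3 + 10 is already positive.
score=10; hours=1
echo "10000 * l(3 + $score) / e(1.8 * l($hours + 2))" | bc -l
```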

View File

@ -1,44 +0,0 @@
# Backup and Restore Guide
## Docker and Ansible
When using docker or ansible, there should be a `volumes` folder, which contains both the database, and all the pictures. Copy this folder to the new instance to restore your data.
### Incremental Database backup
To incrementally backup the DB to an `.sql` file, you can run:
```bash
docker exec -t FOLDERNAME_postgres_1 pg_dumpall -c -U lemmy > lemmy_dump_`date +%Y-%m-%d"_"%H_%M_%S`.sql
```
### A Sample backup script
```bash
#!/bin/sh
# DB Backup
ssh MY_USER@MY_IP "docker exec -t FOLDERNAME_postgres_1 pg_dumpall -c -U lemmy" > ~/BACKUP_LOCATION/INSTANCE_NAME_dump_`date +%Y-%m-%d"_"%H_%M_%S`.sql
# Volumes folder Backup
rsync -avP -zz --rsync-path="sudo rsync" MY_USER@MY_IP:/LEMMY_LOCATION/volumes ~/BACKUP_LOCATION/FOLDERNAME
```
### Restoring the DB
If you need to restore from a `pg_dumpall` file, you need to first clear out your existing database
```bash
# Drop the existing DB
docker exec -i FOLDERNAME_postgres_1 psql -U lemmy -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
# Restore from the .sql backup
cat db_dump.sql | docker exec -i FOLDERNAME_postgres_1 psql -U lemmy # restores the db
# This also might be necessary when doing a db import with a different password.
docker exec -i FOLDERNAME_postgres_1 psql -U lemmy -c "alter user lemmy with password 'bleh'"
```
## More resources
- https://stackoverflow.com/questions/24718706/backup-restore-a-dockerized-postgresql-database
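To run the sample backup script on a schedule, a crontab entry along these lines works (the path is a placeholder):

```bash
# m h dom mon dow command — nightly at 03:00
0 3 * * * /path/to/lemmy_backup.sh
```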

View File

@ -1,22 +1,13 @@
# Configuration
The configuration is based on the file
[defaults.hjson](https://yerbamate.dev/LemmyNet/lemmy/src/branch/master/server/config/defaults.hjson).
This file also contains documentation for all the available options. To override the defaults, you
can copy the options you want to change into your local `config.hjson` file.
The configuration is based on the file [defaults.hjson](server/config/defaults.hjson). This file also contains documentation for all the available options. To override the defaults, you can copy the options you want to change into your local `config.hjson` file.
To use a different `config.hjson` location than the current directory, set the environment variable `LEMMY_CONFIG_LOCATION`.
Additionally, you can override any config files with environment variables. These have the same name as the config options, and are prefixed with `LEMMY_`. For example, you can override the `database.pool_size` with
`LEMMY__DATABASE__POOL_SIZE=10`.
Additionally, you can override any config files with environment variables. These have the same
name as the config options, and are prefixed with `LEMMY_`. For example, you can override the
`database.pool_size` with `LEMMY_DATABASE__POOL_SIZE=10`.
An additional option `LEMMY_DATABASE_URL` is available, which can be used with a PostgreSQL connection string like `postgres://lemmy:password@lemmy_db:5432/lemmy`, passing all connection details at once.
An additional option `LEMMY_DATABASE_URL` is available, which can be used with a PostgreSQL
connection string like `postgres://lemmy:password@lemmy_db:5432/lemmy`, passing all connection
details at once.
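As a quick sketch of the override mechanism (the values here are only examples):

```bash
# Nested options use a double underscore between path segments.
export LEMMY_DATABASE__POOL_SIZE=10   # overrides database.pool_size
export LEMMY_HOSTNAME=example.com     # overrides hostname
# Or hand over all connection details at once:
export LEMMY_DATABASE_URL=postgres://lemmy:password@localhost:5432/lemmy
```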
If the Docker container is not used, manually create the database specified above by running the
following commands:
If the Docker container is not used, manually create the database specified above by running the following commands:
```bash
cd server

View File

@ -1,25 +1,13 @@
# Ansible Installation
This is the same as the [Docker installation](administration_install_docker.md), except that Ansible handles all of it automatically. It also does some extra things like setting up TLS and email for your Lemmy instance.
First, you need to [install Ansible on your local computer](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) (e.g. using `sudo apt install ansible`) or the equivalent for your platform.
Then run the following commands on your local computer:
```bash
git clone https://github.com/LemmyNet/lemmy.git
git clone https://github.com/dessalines/lemmy.git
cd lemmy/ansible/
cp inventory.example inventory
nano inventory # enter your server, domain, contact email
# If the command below fails, you may need to comment out this line
# In the ansible.cfg file:
# interpreter_python=/usr/bin/python3
ansible-playbook lemmy.yml --become
```
To update to a new version, just run the following in your local Lemmy repo:
```bash
git pull origin master
cd ansible
ansible-playbook lemmy.yml --become
```

View File

@ -1,41 +1,31 @@
# Docker Installation
Make sure you have both docker and docker-compose(>=`1.24.0`) installed. On Ubuntu, just run `apt install docker-compose docker.io`. Next,
Make sure you have both docker and docker-compose(>=`1.24.0`) installed:
```bash
# create a folder for the lemmy files. the location doesnt matter, you can put this anywhere you want
mkdir /lemmy
cd /lemmy
# download default config files
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/docker/prod/docker-compose.yml
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/docker/lemmy.hjson
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/docker/iframely.config.local.js
# Set correct permissions for pictrs folder
mkdir -p volumes/pictrs
sudo chown -R 991:991 volumes/pictrs
mkdir lemmy/
cd lemmy/
wget https://raw.githubusercontent.com/dessalines/lemmy/master/docker/prod/docker-compose.yml
wget https://raw.githubusercontent.com/dessalines/lemmy/master/docker/lemmy.hjson
wget https://raw.githubusercontent.com/dessalines/lemmy/master/docker/iframely.config.local.js
# Edit lemmy.hjson, and docker-compose.yml to do more configuration (like adding a custom password)
docker-compose up -d
```
After this, have a look at the [config file](administration_configuration.md) named `lemmy.hjson`, and adjust it, in particular the hostname, and possibly the db password. Then run:
and go to http://localhost:8536.
`docker-compose up -d`
To make Lemmy available outside the server, you need to setup a reverse proxy, like Nginx. [A sample nginx config](https://raw.githubusercontent.com/LemmyNet/lemmy/master/ansible/templates/nginx.conf), could be setup with:
[A sample nginx config](/ansible/templates/nginx.conf) (Note: Avatar / Image uploading won't work without this), could be setup with:
```bash
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/ansible/templates/nginx.conf
wget https://raw.githubusercontent.com/dessalines/lemmy/master/ansible/templates/nginx.conf
# Replace the {{ vars }}
sudo mv nginx.conf /etc/nginx/sites-enabled/lemmy.conf
```
You will also need to setup TLS, for example with [Let's Encrypt](https://letsencrypt.org/). After this you need to restart Nginx to reload the config.
## Updating
To update to the newest version, you can manually change the version in `docker-compose.yml`. Alternatively, fetch the latest version from our git repo:
To update to the newest version, run:
```bash
wget https://raw.githubusercontent.com/LemmyNet/lemmy/master/docker/prod/docker-compose.yml
wget https://raw.githubusercontent.com/dessalines/lemmy/master/docker/prod/docker-compose.yml
docker-compose up -d
```

View File

@ -2,16 +2,9 @@
Information about contributing to Lemmy, whether it is translating, testing, designing or programming.
## Issue tracking / Repositories
- [GitHub (for issues and pull requests)](https://github.com/LemmyNet/lemmy)
- [Gitea (only for pull requests)](https://yerbamate.dev/LemmyNet/lemmy)
- [GitLab (only code-mirror)](https://gitlab.com/dessalines/lemmy)
## Translating
Check out [Lemmy's Weblate](https://weblate.yerbamate.dev/projects/lemmy/) for translations.
Go [here](https://github.com/dessalines/lemmy#translations) for translation instructions.
## Architecture

View File

@ -3,22 +3,11 @@
## Running
```bash
sudo apt install git docker-compose
git clone https://github.com/LemmyNet/lemmy
git clone https://github.com/dessalines/lemmy
cd lemmy/docker/dev
sudo docker-compose up --no-deps --build
./docker_update.sh # This builds and runs it, updating for your changes
```
and go to http://localhost:8536.
To speed up the Docker compile, add the following to `/etc/docker/daemon.json` and restart Docker.
```
{
"features": {
"buildkit": true
}
}
```
If the build is still too slow, you will have to use a
[local build](contributing_local_development.md) instead.
Note that compile times when changing `Cargo.toml` are relatively long with Docker, because builds can't be incrementally cached. If this is a problem for you, you should use [Local Development](contributing_local_development.md).

View File

@ -1,70 +0,0 @@
# Federation Development
## Setup
If you don't have a local clone of the Lemmy repo yet, just run the following command:
```bash
git clone https://github.com/LemmyNet/lemmy
```
## Running locally
You need to have the following packages installed, the Docker service needs to be running.
- docker
- docker-compose
- cargo
- yarn
Then run the following
```bash
cd docker/federation
./run-federation-test.bash -yarn
```
The federation test sets up 3 instances:
Instance / Username | Location
--- | ---
lemmy_alpha | [127.0.0.1:8540](http://127.0.0.1:8540)
lemmy_beta | [127.0.0.1:8550](http://127.0.0.1:8550)
lemmy_gamma | [127.0.0.1:8560](http://127.0.0.1:8560)
You can log into each using the instance name, and `lemmy` as the password, IE (`lemmy_alpha`, `lemmy`).
Firefox containers are a good way to test them interacting.
## Integration tests
To run the suite of federation integration tests:
```bash
cd docker/federation-test
./run-tests.sh
```
## Running on a server
Note that federation is currently in alpha. **Only use it for testing**, not on any production server, and be aware that turning on federation may break your instance.
Follow the normal installation instructions, either with [Ansible](administration_install_ansible.md) or
[manually](administration_install_docker.md). Then replace the line `image: dessalines/lemmy:v0.x.x` in
`/lemmy/docker-compose.yml` with `image: dessalines/lemmy:federation`. Also add the following in
`/lemmy/lemmy.hjson`:
```
federation: {
enabled: true
tls_enabled: true,
allowed_instances: example.com,
}
```
Afterwards, and whenever you want to update to the latest version, run these commands on the server:
```
cd /lemmy/
sudo docker-compose pull
sudo docker-compose up -d
```

View File

@ -1,67 +1,31 @@
### Ubuntu
#### Requirements
- [Rust](https://www.rust-lang.org/)
- [Yarn](https://yarnpkg.com/en/)
- [Postgres](https://www.postgresql.org/)
#### Build requirements:
```
sudo apt install git cargo libssl-dev pkg-config libpq-dev yarn curl gnupg2 git
# install yarn
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update && sudo apt install yarn
```
#### Set up Postgres DB
#### Get the source code
```
git clone https://github.com/LemmyNet/lemmy.git
# or alternatively from gitea
# git clone https://yerbamate.dev/LemmyNet/lemmy.git
```
All the following commands need to be run either in `lemmy/server` or `lemmy/ui`, as indicated
by the `cd` command.
#### Build the backend (Rust)
```
```bash
cd server
cargo build
# for development, use `cargo check` instead
./db-init.sh
```
#### Build the frontend (Typescript)
```
cd ui
yarn
yarn build
```
Or run the commands manually:
#### Setup postgresql
```
sudo apt install postgresql
sudo systemctl start postgresql
# initialize postgres database
sudo -u postgres psql -c "create user lemmy with password 'password' superuser;" -U postgres
sudo -u postgres psql -c 'create database lemmy with owner lemmy;' -U postgres
```bash
psql -c "create user lemmy with password 'password' superuser;" -U postgres
psql -c 'create database lemmy with owner lemmy;' -U postgres
export LEMMY_DATABASE_URL=postgres://lemmy:password@localhost:5432/lemmy
# or execute server/db-init.sh
```
#### Run a local development instance
```
# run each of these in a separate terminal
cd server && cargo run
cd ui && yarn start
```
#### Running
Then open [localhost:4444](http://localhost:4444) in your browser. It will auto-refresh if you edit
any frontend files. For backend coding, you will have to rerun `cargo run`. You can use
`cargo check` as a faster way to find compilation errors.
To speed up incremental builds, you can add the following to `~/.cargo/config`:
```bash
git clone https://github.com/dessalines/lemmy
cd lemmy
./install.sh
# For live coding, where both the front and back end, automagically reload on any save, do:
# cd ui && yarn start
# cd server && cargo watch -x run
```
```
[target.x86_64-unknown-linux-gnu]
rustflags = ["-Clink-arg=-fuse-ld=lld"]
```
Note that this setup doesn't include image uploads or link previews (provided by pict-rs and
iframely respectively). If you want to test those, you should use the
[Docker development](contributing_docker_development.md).

View File

@ -1,16 +0,0 @@
### Tests
#### Rust
After installing [local development dependencies](contributing_local_development.md), run the
following commands in the `server` subfolder:
```bash
psql -U lemmy -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
./test.sh
```
### Federation
Install the [Docker development dependencies](contributing_docker_development.md), and execute
`docker/federation-test/run-tests.sh`
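That is, from the repo root:

```bash
cd docker/federation-test
./run-tests.sh
```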

View File

@ -1,18 +0,0 @@
# Theming Guide
Lemmy uses [Bootstrap v4](https://getbootstrap.com/), and very few custom css classes, so any bootstrap v4 compatible theme should work fine.
## Creating
- Use a tool like [bootstrap.build](https://bootstrap.build/) to create a bootstrap v4 theme. Export the `bootstrap.min.css` once you're done, and save the `_variables.scss` too.
## Testing
- To test out a theme, you can either use your browser's web tools, or a plugin like stylus to copy-paste a theme, when viewing Lemmy.
## Adding
1. Copy `{my-theme-name}.min.css` to `ui/assets/css/themes`. (You can also copy the `_variables.scss` here if you want).
1. Go to `ui/src/utils.ts` and add `{my-theme-name}` to the themes list.
1. Test locally
1. Do a pull request with those changes.
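For example, for a hypothetical theme called `solar` (the name is only an illustration), the steps above boil down to:

```bash
# Copy the built theme (and optionally its _variables.scss) into the themes folder,
cp solar.min.css ui/assets/css/themes/
# then add "solar" to the themes list in ui/src/utils.ts, test locally,
# and open a pull request.
```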

View File

@ -1,6 +1,6 @@
# Lemmy API
*Note: this may lag behind the actual API endpoints [here](../server/src/api). The API should be considered unstable and may change any time.*
*Note: this may lag behind the actual API endpoints [here](../server/src/api).*
<!-- toc -->
@ -92,93 +92,85 @@
- [Request](#request-17)
- [Response](#response-17)
- [HTTP](#http-18)
+ [Get Site Config](#get-site-config)
* [Community](#community)
+ [Get Community](#get-community)
- [Request](#request-18)
- [Response](#response-18)
- [HTTP](#http-19)
+ [Save Site Config](#save-site-config)
+ [Create Community](#create-community)
- [Request](#request-19)
- [Response](#response-19)
- [HTTP](#http-20)
* [Community](#community)
+ [Get Community](#get-community)
+ [List Communities](#list-communities)
- [Request](#request-20)
- [Response](#response-20)
- [HTTP](#http-21)
+ [Create Community](#create-community)
+ [Ban from Community](#ban-from-community)
- [Request](#request-21)
- [Response](#response-21)
- [HTTP](#http-22)
+ [List Communities](#list-communities)
+ [Add Mod to Community](#add-mod-to-community)
- [Request](#request-22)
- [Response](#response-22)
- [HTTP](#http-23)
+ [Ban from Community](#ban-from-community)
+ [Edit Community](#edit-community)
- [Request](#request-23)
- [Response](#response-23)
- [HTTP](#http-24)
+ [Add Mod to Community](#add-mod-to-community)
+ [Follow Community](#follow-community)
- [Request](#request-24)
- [Response](#response-24)
- [HTTP](#http-25)
+ [Edit Community](#edit-community)
+ [Get Followed Communities](#get-followed-communities)
- [Request](#request-25)
- [Response](#response-25)
- [HTTP](#http-26)
+ [Follow Community](#follow-community)
+ [Transfer Community](#transfer-community)
- [Request](#request-26)
- [Response](#response-26)
- [HTTP](#http-27)
+ [Get Followed Communities](#get-followed-communities)
* [Post](#post)
+ [Create Post](#create-post)
- [Request](#request-27)
- [Response](#response-27)
- [HTTP](#http-28)
+ [Transfer Community](#transfer-community)
+ [Get Post](#get-post)
- [Request](#request-28)
- [Response](#response-28)
- [HTTP](#http-29)
* [Post](#post)
+ [Create Post](#create-post)
+ [Get Posts](#get-posts)
- [Request](#request-29)
- [Response](#response-29)
- [HTTP](#http-30)
+ [Get Post](#get-post)
+ [Create Post Like](#create-post-like)
- [Request](#request-30)
- [Response](#response-30)
- [HTTP](#http-31)
+ [Get Posts](#get-posts)
+ [Edit Post](#edit-post)
- [Request](#request-31)
- [Response](#response-31)
- [HTTP](#http-32)
+ [Create Post Like](#create-post-like)
+ [Save Post](#save-post)
- [Request](#request-32)
- [Response](#response-32)
- [HTTP](#http-33)
+ [Edit Post](#edit-post)
* [Comment](#comment)
+ [Create Comment](#create-comment)
- [Request](#request-33)
- [Response](#response-33)
- [HTTP](#http-34)
+ [Save Post](#save-post)
+ [Edit Comment](#edit-comment)
- [Request](#request-34)
- [Response](#response-34)
- [HTTP](#http-35)
* [Comment](#comment)
+ [Create Comment](#create-comment)
+ [Save Comment](#save-comment)
- [Request](#request-35)
- [Response](#response-35)
- [HTTP](#http-36)
+ [Edit Comment](#edit-comment)
+ [Create Comment Like](#create-comment-like)
- [Request](#request-36)
- [Response](#response-36)
- [HTTP](#http-37)
+ [Save Comment](#save-comment)
- [Request](#request-37)
- [Response](#response-37)
- [HTTP](#http-38)
+ [Create Comment Like](#create-comment-like)
- [Request](#request-38)
- [Response](#response-38)
- [HTTP](#http-39)
* [RSS / Atom feeds](#rss--atom-feeds)
+ [All](#all)
+ [Community](#community-1)
@ -787,53 +779,6 @@ Search types are `All, Comments, Posts, Communities, Users, Url`
`POST /site/transfer`
#### Get Site Config
##### Request
```rust
{
op: "GetSiteConfig",
data: {
auth: String
}
}
```
##### Response
```rust
{
op: "GetSiteConfig",
data: {
config_hjson: String,
}
}
```
##### HTTP
`GET /site/config`
#### Save Site Config
##### Request
```rust
{
op: "SaveSiteConfig",
data: {
config_hjson: String,
auth: String
}
}
```
##### Response
```rust
{
op: "SaveSiteConfig",
data: {
config_hjson: String,
}
}
```
##### HTTP
`PUT /site/config`
### Community
#### Get Community
##### Request
@ -1149,7 +1094,6 @@ Post listing types are `All, Subscribed, Community`
page: Option<i64>,
limit: Option<i64>,
community_id: Option<i32>,
community_name: Option<String>,
auth: Option<String>
}
}

View File

@ -1,79 +0,0 @@
# Lemmy Council
- A group of lemmy developers and users that use a well-defined democratic process to steer the project in a positive direction, keep it aligned to community goals, and resolve conflicts.
- Council members are also added as administrators to any official Lemmy instances.
## 1. What gets voted on
This section describes all the aspects of Lemmy where the council has decision making power, namely:
- Coding direction
- Priorities / Emphasis
- Controversial features (For example, an unpopular feature should be removed)
- Moderation and conflict resolution on:
- [dev.lemmy.ml](https://dev.lemmy.ml/)
- [github.com/LemmyNet/lemmy](https://github.com/LemmyNet/lemmy)
- [yerbamate.dev/LemmyNet/lemmy](https://yerbamate.dev/LemmyNet/lemmy)
- [weblate.yerbamate.dev/projects/lemmy/](https://weblate.yerbamate.dev/projects/lemmy/)
- Technical administration of dev.lemmy.ml
- Official Lemmy accounts
- [Mastodon](https://mastodon.social/@LemmyDev)
- [Liberapay](https://liberapay.com/Lemmy/)
- [Patreon](https://www.patreon.com/dessalines)
- Council membership changes
- Changes to these rules
## 2. Feedback and Activity Reports
Every week, the council should make a thread on Lemmy that details its activity during the past week, be it development, moderation, or anything else mentioned in 1.
At the same time, users can give feedback and suggestions in this thread. This should be taken into account by the council. Council members can call for a vote on any controversial issues, if they can't be resolved by discussion.
## 3. Voting Process
Most of the time, we keep each other up to date through the Matrix chat, and take informal decisions on uncontroversial issues. For example, a user clearly violating the site rules could be banned by a single person, or ideally after discussing it with at least one other member.
If an issue can not be resolved in this way, then any council member can call for a vote, which works in the following way:
- Any council member can call for a vote, on any topic mentioned in 1.
- This should be used if there is any controversy in the community, or between council members.
- Before taking any decision, there needs to be a discussion where every council member can
explain their position.
- Discussion should be taken with the goal of reaching a compromise that is acceptable for
everyone.
- After the discussion, voting is done through Matrix emojis (👍: yes, 👎: no, X: abstain) and must
stay open for at least two days.
- All members of the Lemmy council have equal voting power.
- Decisions should be reached unanimously, or nearly so. If this is not possible, at least
2/3 of votes must be in favour for the motion to pass.
- Once a decision is reached in this way, every member needs to abide by it.
## 4. Joining
- We use the following process: anyone who is active around Lemmy can recommend any other active person to join the council. This has to be approved by a majority of the council.
- Active users are defined as those who contribute to Lemmy in some way for at least an hour per week on average, doing things like reporting bugs, discussing rules and features, translating, promoting, developing, or doing other things that aim to improve Lemmy as a whole.
  - People should have joined at least a month ago.
- The member list is public.
- Note: we would like to have a process where community members can elect candidates for the council, but this is not realistic because a single user could easily create multiple accounts and cheat the vote.
- Limit growth to one new member per month at most.
## 5. Removing members
- Inactive members should be removed from the council after a few months of inactivity, and after receiving a notification about this.
- Members that dont follow binding council decisions should be removed.
- Any member can be removed in a vote.
## 6. Goals
- We encourage the membership of groups such as LGBT, religious or ethnic minorities, abuse victims, etc etc, and strive to create a safe space for them to express their opinions. We also support measures to increase participation by the previously mentioned groups.
- The following are banned, and will always be harshly punished: fascism, abuse, racism, sexism, etc etc,
## 7. Communication
- A private Matrix chat for all council members.
- (Once private communities are done) A private community on dev.lemmy.ml for issues.
## 8. Member List / Contact Info
General Contact [@LemmyDev Mastodon](https://mastodon.social/@LemmyDev)
- [Dessalines](https://dev.lemmy.ml/u/dessalines)
- [Nutomic](https://dev.lemmy.ml/u/nutomic)
- [AgreeableLandscape](https://dev.lemmy.ml/u/AgreeableLandscape)
- [fruechtchen](https://dev.lemmy.ml/u/fruechtchen)
- [kixiQu](https://dev.lemmy.ml/u/kixiQu)

73
install.sh vendored
View File

@ -10,55 +10,25 @@ export LEMMY_DATABASE_URL=postgres://lemmy:password@localhost:5432/lemmy
export JWT_SECRET=changeme
export HOSTNAME=rrr
yes_no_prompt_invalid() {
echo "Invalid input. Please enter either \"y\" or \"n\"." 1>&2
}
ask_to_init_db() {
init_db_valid=0
init_db_final=0
while [ "$init_db_valid" == 0 ]
do
read -p "Initialize database (y/n)? " init_db
case "$init_db" in
[yY]* ) init_db_valid=1; init_db_final=1;;
[nN]* ) init_db_valid=1; init_db_final=0;;
* ) yes_no_prompt_invalid;;
esac
echo
done
if [ "$init_db_final" = 1 ]
then
source ./server/db-init.sh
read -n 1 -s -r -p "Press ANY KEY to continue execution of this script, press CTRL+C to quit..."
echo
fi
}
ask_to_auto_reload() {
auto_reload_valid=0
auto_reload_final=0
while [ "$auto_reload_valid" == 0 ]
do
echo "Automagically reload the project when source files are changed?"
echo "ONLY ENABLE THIS FOR DEVELOPMENT!"
read -p "(y/n) " auto_reload
case "$auto_reload" in
[yY]* ) auto_reload_valid=1; auto_reload_final=1;;
[nN]* ) auto_reload_valid=1; auto_reload_final=0;;
* ) yes_no_prompt_invalid;;
esac
echo
done
if [ "$auto_reload_final" = 1 ]
then
cd ui && yarn start
cd server && cargo watch -x run
fi
}
# Optionally initialize the database
ask_to_init_db
init_db_valid=0
init_db_final=0
while [ "$init_db_valid" == 0 ]
do
read -p "Initialize database (y/n)? " init_db
case "${init_db,,}" in
y|yes ) init_db_valid=1; init_db_final=1;;
n|no ) init_db_valid=1; init_db_final=0;;
* ) echo "Invalid input" 1>&2;;
esac
echo
done
if [ "$init_db_final" = 1 ]
then
source ./server/db-init.sh
read -n 1 -s -r -p "Press ANY KEY to continue execution of this script, press CTRL+C to quit..."
echo
fi
# Build the web client
cd ui
@ -67,7 +37,8 @@ yarn build
# Build and run the backend
cd ../server
RUST_LOG=debug cargo run
cargo run
# For live coding, where both the front and back end, automagically reload on any save
ask_to_auto_reload
# For live coding, where both the front and back end, automagically reload on any save, do:
# cd ui && yarn start
# cd server && cargo watch -x run

View File

@ -1,5 +1,2 @@
tab_spaces = 2
edition="2018"
imports_layout="HorizontalVertical"
merge_imports=true
reorder_imports=true
edition="2018"

3225
server/Cargo.lock generated vendored

File diff suppressed because it is too large Load Diff

61
server/Cargo.toml vendored
View File

@ -1,54 +1,35 @@
[package]
name = "lemmy_server"
version = "0.0.1"
authors = ["Dessalines <happydooby@gmail.com>"]
edition = "2018"
[profile.release]
lto = true
[workspace]
members = [
"lemmy_utils",
"lemmy_db"
]
[dependencies]
lemmy_utils = { path = "./lemmy_utils" }
lemmy_db = { path = "./lemmy_db" }
diesel = "1.4.4"
diesel = { version = "1.4.2", features = ["postgres","chrono", "r2d2"] }
diesel_migrations = "1.4.0"
dotenv = "0.15.0"
activitystreams = "0.6.2"
activitystreams-new = { git = "https://git.asonix.dog/asonix/activitystreams-sketch" }
activitystreams-ext = { git = "https://git.asonix.dog/asonix/activitystreams-ext" }
bcrypt = "0.8.0"
bcrypt = "0.6.1"
activitypub = "0.2.0"
chrono = { version = "0.4.7", features = ["serde"] }
serde_json = { version = "1.0.52", features = ["preserve_order"]}
failure = "0.1.8"
serde = { version = "1.0.105", features = ["derive"] }
actix = "0.10.0-alpha.2"
actix-web = { version = "3.0.0-alpha.3", features = ["rustls"] }
actix-files = "0.3.0-alpha.1"
actix-web-actors = "3.0.0-alpha.1"
actix-rt = "1.1.1"
awc = "2.0.0-alpha.2"
log = "0.4.0"
failure = "0.1.5"
serde_json = { version = "1.0.45", features = ["preserve_order"]}
serde = { version = "1.0.94", features = ["derive"] }
actix = "0.9.0"
actix-web = "2.0.0"
actix-files = "0.2.1"
actix-web-actors = "2.0.0"
actix-rt = "1.0.0"
env_logger = "0.7.1"
rand = "0.7.3"
strum = "0.18.0"
strum_macros = "0.18.0"
strum = "0.17.1"
strum_macros = "0.17.1"
jsonwebtoken = "7.0.1"
regex = "1.3.4"
lazy_static = "1.3.0"
lettre = "0.9.2"
lettre_email = "0.9.2"
sha2 = "0.8.1"
rss = "1.9.0"
url = { version = "2.1.1", features = ["serde"] }
percent-encoding = "2.1.0"
openssl = "0.10"
http = "0.2.1"
http-signature-normalization-actix = { version = "0.4.0-alpha.2", default-features = false, features = ["sha-2"] }
base64 = "0.12.1"
tokio = "0.2.21"
futures = "0.3.5"
itertools = "0.9.0"
uuid = { version = "0.8", features = ["serde", "v4"] }
sha2 = "0.9"
async-trait = "0.1.36"
htmlescape = "0.3.1"
config = "0.10.1"
hjson = "0.8.2"

View File

@ -1,4 +0,0 @@
{
hostname: "localhost:8536"
federation_enabled: true
}

View File

@ -1,15 +1,4 @@
{
# # optional: parameters for automatic configuration of new instance (only used at first start)
# setup: {
# # username for the admin user
# admin_username: ""
# # password for the admin user
# admin_password: ""
# # optional: email for the admin user (can be omitted and set later through the website)
# admin_email: ""
# # name of the site (can be changed later)
# site_name: ""
# }
# settings related to the postgresql database
database: {
# username to connect to postgres
@ -26,15 +15,18 @@
pool_size: 5
}
# the domain name of your instance (eg "dev.lemmy.ml")
hostname: null
hostname: "my_domain"
# address where lemmy should listen for incoming requests
bind: "0.0.0.0"
# port where lemmy should listen for incoming requests
port: 8536
# json web token for authorization between server and client
jwt_secret: "changeme"
# The location of the frontend
# The dir for the front end
front_end_dir: "../ui/dist"
# whether to enable activitypub federation. this feature is in alpha, do not enable in production, as it might
# cause problems like remote instances fetching and permanently storing bad data.
federation_enabled: false
# rate limits for various user actions, by user ip
rate_limit: {
# maximum number of messages created in interval
@ -50,26 +42,15 @@
# interval length for registration limit
register_per_second: 3600
}
# settings related to activitypub federation
federation: {
# whether to enable activitypub federation. this feature is in alpha, do not enable in production.
enabled: false
# whether tls is required for activitypub. only disable this for debugging, never for production.
tls_enabled: true
# comma-separated list of instances with which federation is allowed
allowed_instances: ""
}
# # email sending configuration
# email: {
# # hostname and port of the smtp server
# # hostname of the smtp server
# smtp_server: ""
# # login name for smtp server
# smtp_login: ""
# # password to login to the smtp server
# smtp_password: ""
# # address to send emails from, eg "noreply@your-instance.com"
# # address to send emails from, eg "info@your-instance.com"
# smtp_from_address: ""
# # whether or not smtp connections should use tls
# use_tls: true
# }
}

112
server/db-init.sh vendored
View File

@ -1,107 +1,43 @@
#!/bin/bash
set -e
# Default configurations
username=lemmy
dbname=lemmy
port=5432
yes_no_prompt_invalid() {
echo "Invalid input. Please enter either \"y\" or \"n\"." 1>&2
}
password=""
password_confirm=""
password_valid=0
print_config() {
echo " database name: $dbname"
echo " username: $username"
echo " port: $port"
}
ask_for_db_config() {
echo "The default database configuration is:"
print_config
while [ "$password_valid" == 0 ]
do
read -p "Enter database password: " -s password
echo
default_config_final=0
default_config_valid=0
while [ "$default_config_valid" == 0 ]
do
read -p "Use this configuration (y/n)? " default_config
case "$default_config" in
[yY]* ) default_config_valid=1; default_config_final=1;;
[nN]* ) default_config_valid=1; default_config_final=0;;
* ) yes_no_prompt_invalid;;
esac
echo
done
read -p "Verify database password: " -s password_confirm
echo
echo
if [ "$default_config_final" == 0 ]
# Start the loop from the top if either check fails
if [ -z "$password" ]
then
config_ok_final=0
while [ "$config_ok_final" == 0 ]
do
read -p "Database name: " dbname
read -p "Username: " username
read -p "Port: " port
#echo
#echo "The database configuration is:"
#print_config
#echo
config_ok_valid=0
while [ "$config_ok_valid" == 0 ]
do
read -p "Use this configuration (y/n)? " config_ok
case "$config_ok" in
[yY]* ) config_ok_valid=1; config_ok_final=1;;
[nN]* ) config_ok_valid=1; config_ok_final=0;;
* ) yes_no_prompt_invalid;;
esac
echo
done
done
echo "Error: Password cannot be empty." 1>&2
echo
continue
fi
}
ask_for_password() {
password=""
password_confirm=""
password_valid=0
while [ "$password_valid" == 0 ]
do
read -p "Enter database password: " -s password
if [ "$password" != "$password_confirm" ]
then
echo "Error: Passwords don't match." 1>&2
echo
continue
fi
read -p "Verify database password: " -s password_confirm
echo
echo
# Set the password_valid variable to break out of the loop
password_valid=1
done
# Start the loop from the top if either check fails
if [ -z "$password" ]
then
echo "Error: Password cannot be empty." 1>&2
echo
continue
fi
if [ "$password" != "$password_confirm" ]
then
echo "Error: Passwords don't match." 1>&2
echo
continue
fi
# Set the password_valid variable to break out of the loop
password_valid=1
done
}
ask_for_db_config
ask_for_password
psql -c "CREATE USER $username WITH PASSWORD '$password' SUPERUSER;" -U postgres
psql -c "CREATE DATABASE $dbname WITH OWNER $username;" -U postgres
psql -c 'CREATE DATABASE $dbname WITH OWNER $username;' -U postgres
export LEMMY_DATABASE_URL=postgres://$username:$password@localhost:$port/$dbname
echo "The database URL is $LEMMY_DATABASE_URL"
echo $LEMMY_DATABASE_URL

2
server/diesel.toml vendored
View File

@ -2,4 +2,4 @@
# see diesel.rs/guides/configuring-diesel-cli
[print_schema]
file = "lemmy_db/src/schema.rs"
file = "src/schema.rs"

View File

@ -1,15 +0,0 @@
[package]
name = "lemmy_db"
version = "0.1.0"
edition = "2018"
[dependencies]
diesel = { version = "1.4.4", features = ["postgres","chrono","r2d2","64-column-tables","serde_json"] }
chrono = { version = "0.4.7", features = ["serde"] }
serde = { version = "1.0.105", features = ["derive"] }
serde_json = { version = "1.0.52", features = ["preserve_order"]}
strum = "0.18.0"
strum_macros = "0.18.0"
log = "0.4.0"
sha2 = "0.9"
bcrypt = "0.8.0"

View File

@ -1,165 +0,0 @@
use crate::{schema::activity, Crud};
use diesel::{dsl::*, result::Error, *};
use log::debug;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::{
fmt::Debug,
io::{Error as IoError, ErrorKind},
};
#[derive(Queryable, Identifiable, PartialEq, Debug, Serialize, Deserialize)]
#[table_name = "activity"]
pub struct Activity {
pub id: i32,
pub user_id: i32,
pub data: Value,
pub local: bool,
pub published: chrono::NaiveDateTime,
pub updated: Option<chrono::NaiveDateTime>,
}
#[derive(Insertable, AsChangeset, Clone, Serialize, Deserialize)]
#[table_name = "activity"]
pub struct ActivityForm {
pub user_id: i32,
pub data: Value,
pub local: bool,
pub updated: Option<chrono::NaiveDateTime>,
}
impl Crud<ActivityForm> for Activity {
fn read(conn: &PgConnection, activity_id: i32) -> Result<Self, Error> {
use crate::schema::activity::dsl::*;
activity.find(activity_id).first::<Self>(conn)
}
fn delete(conn: &PgConnection, activity_id: i32) -> Result<usize, Error> {
use crate::schema::activity::dsl::*;
diesel::delete(activity.find(activity_id)).execute(conn)
}
fn create(conn: &PgConnection, new_activity: &ActivityForm) -> Result<Self, Error> {
use crate::schema::activity::dsl::*;
insert_into(activity)
.values(new_activity)
.get_result::<Self>(conn)
}
fn update(
conn: &PgConnection,
activity_id: i32,
new_activity: &ActivityForm,
) -> Result<Self, Error> {
use crate::schema::activity::dsl::*;
diesel::update(activity.find(activity_id))
.set(new_activity)
.get_result::<Self>(conn)
}
}
pub fn do_insert_activity<T>(
conn: &PgConnection,
user_id: i32,
data: &T,
local: bool,
) -> Result<Activity, IoError>
where
T: Serialize + Debug,
{
debug!("inserting activity for user {}, data {:?}", user_id, &data);
let activity_form = ActivityForm {
user_id,
data: serde_json::to_value(&data)?,
local,
updated: None,
};
let result = Activity::create(&conn, &activity_form);
match result {
Ok(s) => Ok(s),
Err(e) => Err(IoError::new(
ErrorKind::Other,
format!("Failed to insert activity into database: {}", e),
)),
}
}
#[cfg(test)]
mod tests {
use crate::{
activity::{Activity, ActivityForm},
tests::establish_unpooled_connection,
user::{UserForm, User_},
Crud,
ListingType,
SortType,
};
use serde_json::Value;
#[test]
fn test_crud() {
let conn = establish_unpooled_connection();
let creator_form = UserForm {
name: "activity_creator_pm".into(),
preferred_username: None,
password_encrypted: "nope".into(),
email: None,
matrix_user_id: None,
avatar: None,
admin: false,
banned: false,
updated: None,
show_nsfw: false,
theme: "darkly".into(),
default_sort_type: SortType::Hot as i16,
default_listing_type: ListingType::Subscribed as i16,
lang: "browser".into(),
show_avatars: true,
send_notifications_to_email: false,
actor_id: "http://fake.com".into(),
bio: None,
local: true,
private_key: None,
public_key: None,
last_refreshed_at: None,
};
let inserted_creator = User_::create(&conn, &creator_form).unwrap();
let test_json: Value = serde_json::from_str(
r#"{
"street": "Article Circle Expressway 1",
"city": "North Pole",
"postcode": "99705",
"state": "Alaska"
}"#,
)
.unwrap();
let activity_form = ActivityForm {
user_id: inserted_creator.id,
data: test_json.to_owned(),
local: true,
updated: None,
};
let inserted_activity = Activity::create(&conn, &activity_form).unwrap();
let expected_activity = Activity {
id: inserted_activity.id,
user_id: inserted_creator.id,
data: test_json,
local: true,
published: inserted_activity.published,
updated: None,
};
let read_activity = Activity::read(&conn, inserted_activity.id).unwrap();
let num_deleted = Activity::delete(&conn, inserted_activity.id).unwrap();
User_::delete(&conn, inserted_creator.id).unwrap();
assert_eq!(expected_activity, read_activity);
assert_eq!(expected_activity, inserted_activity);
assert_eq!(1, num_deleted);
}
}

View File

@ -1,22 +0,0 @@
[package]
name = "lemmy_utils"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
regex = "1.3.5"
config = { version = "0.10.1", default-features = false, features = ["hjson"] }
chrono = { version = "0.4.7", features = ["serde"] }
lettre = "0.9.3"
lettre_email = "0.9.4"
log = "0.4.0"
itertools = "0.9.0"
rand = "0.7.3"
serde = { version = "1.0.105", features = ["derive"] }
serde_json = { version = "1.0.52", features = ["preserve_order"]}
comrak = "0.7"
lazy_static = "1.3.0"
openssl = "0.10"
url = { version = "2.1.1", features = ["serde"] }

View File

@ -1,337 +0,0 @@
#[macro_use]
pub extern crate lazy_static;
pub extern crate comrak;
pub extern crate lettre;
pub extern crate lettre_email;
pub extern crate openssl;
pub extern crate rand;
pub extern crate regex;
pub extern crate serde_json;
pub extern crate url;
pub mod settings;
use crate::settings::Settings;
use chrono::{DateTime, FixedOffset, Local, NaiveDateTime, Utc};
use itertools::Itertools;
use lettre::{
smtp::{
authentication::{Credentials, Mechanism},
extension::ClientId,
ConnectionReuseParameters,
},
ClientSecurity,
SmtpClient,
Transport,
};
use lettre_email::Email;
use openssl::{pkey::PKey, rsa::Rsa};
use rand::{distributions::Alphanumeric, thread_rng, Rng};
use regex::{Regex, RegexBuilder};
use std::io::{Error, ErrorKind};
use url::Url;
pub fn to_datetime_utc(ndt: NaiveDateTime) -> DateTime<Utc> {
DateTime::<Utc>::from_utc(ndt, Utc)
}
pub fn naive_from_unix(time: i64) -> NaiveDateTime {
NaiveDateTime::from_timestamp(time, 0)
}
pub fn convert_datetime(datetime: NaiveDateTime) -> DateTime<FixedOffset> {
let now = Local::now();
DateTime::<FixedOffset>::from_utc(datetime, *now.offset())
}
pub fn is_email_regex(test: &str) -> bool {
EMAIL_REGEX.is_match(test)
}
pub fn remove_slurs(test: &str) -> String {
SLUR_REGEX.replace_all(test, "*removed*").to_string()
}
pub fn slur_check(test: &str) -> Result<(), Vec<&str>> {
let mut matches: Vec<&str> = SLUR_REGEX.find_iter(test).map(|mat| mat.as_str()).collect();
// Unique
matches.sort_unstable();
matches.dedup();
if matches.is_empty() {
Ok(())
} else {
Err(matches)
}
}
pub fn slurs_vec_to_str(slurs: Vec<&str>) -> String {
let start = "No slurs - ";
let combined = &slurs.join(", ");
[start, combined].concat()
}
pub fn generate_random_string() -> String {
thread_rng().sample_iter(&Alphanumeric).take(30).collect()
}
pub fn send_email(
subject: &str,
to_email: &str,
to_username: &str,
html: &str,
) -> Result<(), String> {
let email_config = Settings::get().email.ok_or("no_email_setup")?;
let email = Email::builder()
.to((to_email, to_username))
.from(email_config.smtp_from_address.to_owned())
.subject(subject)
.html(html)
.build()
.unwrap();
let mailer = if email_config.use_tls {
SmtpClient::new_simple(&email_config.smtp_server).unwrap()
} else {
SmtpClient::new(&email_config.smtp_server, ClientSecurity::None).unwrap()
}
.hello_name(ClientId::Domain(Settings::get().hostname))
.smtp_utf8(true)
.authentication_mechanism(Mechanism::Plain)
.connection_reuse(ConnectionReuseParameters::ReuseUnlimited);
let mailer = if let (Some(login), Some(password)) =
(&email_config.smtp_login, &email_config.smtp_password)
{
mailer.credentials(Credentials::new(login.to_owned(), password.to_owned()))
} else {
mailer
};
let mut transport = mailer.transport();
let result = transport.send(email.into());
transport.close();
match result {
Ok(_) => Ok(()),
Err(e) => Err(e.to_string()),
}
}
pub fn markdown_to_html(text: &str) -> String {
comrak::markdown_to_html(text, &comrak::ComrakOptions::default())
}
// TODO nothing is done with community / group webfingers yet, so just ignore those for now
#[derive(Clone, PartialEq, Eq, Hash)]
pub struct MentionData {
pub name: String,
pub domain: String,
}
impl MentionData {
pub fn is_local(&self) -> bool {
Settings::get().hostname.eq(&self.domain)
}
pub fn full_name(&self) -> String {
format!("@{}@{}", &self.name, &self.domain)
}
}
pub fn scrape_text_for_mentions(text: &str) -> Vec<MentionData> {
let mut out: Vec<MentionData> = Vec::new();
for caps in MENTIONS_REGEX.captures_iter(text) {
out.push(MentionData {
name: caps["name"].to_string(),
domain: caps["domain"].to_string(),
});
}
out.into_iter().unique().collect()
}
pub fn is_valid_username(name: &str) -> bool {
VALID_USERNAME_REGEX.is_match(name)
}
pub fn is_valid_community_name(name: &str) -> bool {
VALID_COMMUNITY_NAME_REGEX.is_match(name)
}
pub fn is_valid_post_title(title: &str) -> bool {
VALID_POST_TITLE_REGEX.is_match(title)
}
#[cfg(test)]
mod tests {
use crate::{
is_email_regex,
is_valid_community_name,
is_valid_post_title,
is_valid_username,
remove_slurs,
scrape_text_for_mentions,
slur_check,
slurs_vec_to_str,
};
#[test]
fn test_mentions_regex() {
let text = "Just read a great blog post by [@tedu@honk.teduangst.com](/u/test). And another by !test_community@fish.teduangst.com . Another [@lemmy@lemmy-alpha:8540](/u/fish)";
let mentions = scrape_text_for_mentions(text);
assert_eq!(mentions[0].name, "tedu".to_string());
assert_eq!(mentions[0].domain, "honk.teduangst.com".to_string());
assert_eq!(mentions[1].domain, "lemmy-alpha:8540".to_string());
}
#[test]
fn test_email() {
assert!(is_email_regex("gush@gmail.com"));
assert!(!is_email_regex("nada_neutho"));
}
#[test]
fn test_valid_register_username() {
assert!(is_valid_username("Hello_98"));
assert!(is_valid_username("ten"));
assert!(!is_valid_username("Hello-98"));
assert!(!is_valid_username("a"));
assert!(!is_valid_username(""));
}
#[test]
fn test_valid_community_name() {
assert!(is_valid_community_name("example"));
assert!(is_valid_community_name("example_community"));
assert!(!is_valid_community_name("Example"));
assert!(!is_valid_community_name("Ex"));
assert!(!is_valid_community_name(""));
}
#[test]
fn test_valid_post_title() {
assert!(is_valid_post_title("Post Title"));
assert!(is_valid_post_title(" POST TITLE 😃😃😃😃😃"));
assert!(!is_valid_post_title("\n \n \n \n ")); // tabs/spaces/newlines
}
#[test]
fn test_slur_filter() {
let test =
"coons test dindu ladyboy tranny retardeds. Capitalized Niggerz. This is a bunch of other safe text.";
let slur_free = "No slurs here";
assert_eq!(
remove_slurs(&test),
"*removed* test *removed* *removed* *removed* *removed*. Capitalized *removed*. This is a bunch of other safe text."
.to_string()
);
let has_slurs_vec = vec![
"Niggerz",
"coons",
"dindu",
"ladyboy",
"retardeds",
"tranny",
];
let has_slurs_err_str = "No slurs - Niggerz, coons, dindu, ladyboy, retardeds, tranny";
assert_eq!(slur_check(test), Err(has_slurs_vec));
assert_eq!(slur_check(slur_free), Ok(()));
if let Err(slur_vec) = slur_check(test) {
assert_eq!(&slurs_vec_to_str(slur_vec), has_slurs_err_str);
}
}
// These helped with testing
// #[test]
// fn test_send_email() {
// let result = send_email("not a subject", "test_email@gmail.com", "ur user", "<h1>HI there</h1>");
// assert!(result.is_ok());
// }
}
lazy_static! {
static ref EMAIL_REGEX: Regex = Regex::new(r"^[a-zA-Z0-9.!#$%&*+/=?^_`{|}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*$").unwrap();
static ref SLUR_REGEX: Regex = RegexBuilder::new(r"(fag(g|got|tard)?|maricos?|cock\s?sucker(s|ing)?|\bn(i|1)g(\b|g?(a|er)?(s|z)?)\b|dindu(s?)|mudslime?s?|kikes?|mongoloids?|towel\s*heads?|\bspi(c|k)s?\b|\bchinks?|niglets?|beaners?|\bnips?\b|\bcoons?\b|jungle\s*bunn(y|ies?)|jigg?aboo?s?|\bpakis?\b|rag\s*heads?|gooks?|cunts?|bitch(es|ing|y)?|puss(y|ies?)|twats?|feminazis?|whor(es?|ing)|\bslut(s|t?y)?|\btr(a|@)nn?(y|ies?)|ladyboy(s?)|\b(b|re|r)tard(ed)?s?)").case_insensitive(true).build().unwrap();
static ref USERNAME_MATCHES_REGEX: Regex = Regex::new(r"/u/[a-zA-Z][0-9a-zA-Z_]*").unwrap();
// TODO keep this old one; it didn't handle ports well, though
// static ref MENTIONS_REGEX: Regex = Regex::new(r"@(?P<name>[\w.]+)@(?P<domain>[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)").unwrap();
static ref MENTIONS_REGEX: Regex = Regex::new(r"@(?P<name>[\w.]+)@(?P<domain>[a-zA-Z0-9._:-]+)").unwrap();
static ref VALID_USERNAME_REGEX: Regex = Regex::new(r"^[a-zA-Z0-9_]{3,20}$").unwrap();
static ref VALID_COMMUNITY_NAME_REGEX: Regex = Regex::new(r"^[a-z0-9_]{3,20}$").unwrap();
static ref VALID_POST_TITLE_REGEX: Regex = Regex::new(r".*\S.*").unwrap();
pub static ref WEBFINGER_COMMUNITY_REGEX: Regex = Regex::new(&format!(
"^group:([a-z0-9_]{{3, 20}})@{}$",
Settings::get().hostname
))
.unwrap();
pub static ref WEBFINGER_USER_REGEX: Regex = Regex::new(&format!(
"^acct:([a-z0-9_]{{3, 20}})@{}$",
Settings::get().hostname
))
.unwrap();
pub static ref CACHE_CONTROL_REGEX: Regex =
Regex::new("^((text|image)/.+|application/javascript)$").unwrap();
}
pub struct Keypair {
pub private_key: String,
pub public_key: String,
}
/// Generate the asymmetric keypair for ActivityPub HTTP signatures.
pub fn generate_actor_keypair() -> Result<Keypair, Error> {
let rsa = Rsa::generate(2048)?;
let pkey = PKey::from_rsa(rsa)?;
let public_key = pkey.public_key_to_pem()?;
let private_key = pkey.private_key_to_pem_pkcs8()?;
let key_to_string = |key| match String::from_utf8(key) {
Ok(s) => Ok(s),
Err(e) => Err(Error::new(
ErrorKind::Other,
format!("Failed converting key to string: {}", e),
)),
};
Ok(Keypair {
private_key: key_to_string(private_key)?,
public_key: key_to_string(public_key)?,
})
}
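// Usage sketch: generate the keypair when creating a local actor and store the
// PEM strings on the row (the private_key/public_key columns come from the
// federation migrations elsewhere in this diff):
//
//   let keypair = generate_actor_keypair()?;
//   user_form.private_key = Some(keypair.private_key);
//   user_form.public_key = Some(keypair.public_key);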
pub enum EndpointType {
Community,
User,
Post,
Comment,
PrivateMessage,
}
pub fn get_apub_protocol_string() -> &'static str {
if Settings::get().federation.tls_enabled {
"https"
} else {
"http"
}
}
/// Generates the ActivityPub ID for a given object type and ID.
pub fn make_apub_endpoint(endpoint_type: EndpointType, name: &str) -> Url {
let point = match endpoint_type {
EndpointType::Community => "c",
EndpointType::User => "u",
EndpointType::Post => "post",
EndpointType::Comment => "comment",
EndpointType::PrivateMessage => "private_message",
};
Url::parse(&format!(
"{}://{}/{}/{}",
get_apub_protocol_string(),
Settings::get().hostname,
point,
name
))
.unwrap()
}
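// Example (hypothetical settings: hostname "lemmy.example" with TLS enabled):
//
//   let url = make_apub_endpoint(EndpointType::Community, "rust");
//   assert_eq!(url.as_str(), "https://lemmy.example/c/rust");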

View File

@ -1,112 +0,0 @@
-- Adds a newest_activity_time for the post_views, in order to sort by newest comment
drop view post_view;
drop view post_mview;
drop materialized view post_aggregates_mview;
drop view post_aggregates_view;
-- Drop the columns
alter table post drop column embed_title;
alter table post drop column embed_description;
alter table post drop column embed_html;
alter table post drop column thumbnail_url;
-- regen post view
create view post_aggregates_view as
select
p.*,
(select u.banned from user_ u where p.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb where p.creator_id = cb.user_id and p.community_id = cb.community_id) as banned_from_community,
(select name from user_ where p.creator_id = user_.id) as creator_name,
(select avatar from user_ where p.creator_id = user_.id) as creator_avatar,
(select name from community where p.community_id = community.id) as community_name,
(select removed from community c where p.community_id = c.id) as community_removed,
(select deleted from community c where p.community_id = c.id) as community_deleted,
(select nsfw from community c where p.community_id = c.id) as community_nsfw,
(select count(*) from comment where comment.post_id = p.id) as number_of_comments,
coalesce(sum(pl.score), 0) as score,
count (case when pl.score = 1 then 1 else null end) as upvotes,
count (case when pl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(pl.score) , 0),
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join post_like pl on p.id = pl.post_id
left join (
select post_id,
max(published) as recent_comment_time
from comment
group by 1
) c on p.id = c.post_id
group by p.id, c.recent_comment_time;
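-- Note on the hot_rank/newest_activity_time inputs above: once a post is more
-- than a month old its activity time is pinned to the publish time, so a new
-- comment on an old post no longer bumps it; for younger posts the rank uses
-- greatest(most recent comment time, publish time).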
create materialized view post_aggregates_mview as select * from post_aggregates_view;
create unique index idx_post_aggregates_mview_id on post_aggregates_mview (id);
create view post_view as
with all_post as (
select
pa.*
from post_aggregates_view pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
create view post_mview as
with all_post as (
select
pa.*
from post_aggregates_mview pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;

View File

@ -1,115 +0,0 @@
-- Add the columns
alter table post add column embed_title text;
alter table post add column embed_description text;
alter table post add column embed_html text;
alter table post add column thumbnail_url text;
-- Regenerate the views
-- Adds a newest_activity_time for the post_views, in order to sort by newest comment
drop view post_view;
drop view post_mview;
drop materialized view post_aggregates_mview;
drop view post_aggregates_view;
-- regen post view
create view post_aggregates_view as
select
p.*,
(select u.banned from user_ u where p.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb where p.creator_id = cb.user_id and p.community_id = cb.community_id) as banned_from_community,
(select name from user_ where p.creator_id = user_.id) as creator_name,
(select avatar from user_ where p.creator_id = user_.id) as creator_avatar,
(select name from community where p.community_id = community.id) as community_name,
(select removed from community c where p.community_id = c.id) as community_removed,
(select deleted from community c where p.community_id = c.id) as community_deleted,
(select nsfw from community c where p.community_id = c.id) as community_nsfw,
(select count(*) from comment where comment.post_id = p.id) as number_of_comments,
coalesce(sum(pl.score), 0) as score,
count (case when pl.score = 1 then 1 else null end) as upvotes,
count (case when pl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(pl.score) , 0),
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join post_like pl on p.id = pl.post_id
left join (
select post_id,
max(published) as recent_comment_time
from comment
group by 1
) c on p.id = c.post_id
group by p.id, c.recent_comment_time;
create materialized view post_aggregates_mview as select * from post_aggregates_view;
create unique index idx_post_aggregates_mview_id on post_aggregates_mview (id);
create view post_view as
with all_post as (
select
pa.*
from post_aggregates_view pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
create view post_mview as
with all_post as (
select
pa.*
from post_aggregates_mview pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;

View File

@ -1,16 +0,0 @@
drop table activity;
alter table user_
drop column actor_id,
drop column private_key,
drop column public_key,
drop column bio,
drop column local,
drop column last_refreshed_at;
alter table community
drop column actor_id,
drop column private_key,
drop column public_key,
drop column local,
drop column last_refreshed_at;

View File

@ -1,36 +0,0 @@
-- The Activitypub activity table
-- All user actions must create a row here.
create table activity (
id serial primary key,
user_id int references user_ on update cascade on delete cascade not null, -- Ensures that the user is set up here.
data jsonb not null,
local boolean not null default true,
published timestamp not null default now(),
updated timestamp
);
-- Making sure that id is unique
create unique index idx_activity_unique_apid on activity ((data ->> 'id'::text));
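-- Example (hypothetical id): a second insert carrying the same ActivityPub id
-- violates this index, so the same activity cannot be stored twice:
--   insert into activity (user_id, data)
--   values (1, '{"id": "https://lemmy.example/activity/1"}');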
-- Add federation columns to the two actor tables
alter table user_
-- TODO uniqueness constraints should be added on these 3 columns later
add column actor_id character varying(255) not null default 'http://fake.com', -- This needs to be checked and updated in code, building from the site url if local
add column bio text, -- not on community, already has description
add column local boolean not null default true,
add column private_key text, -- These need to be generated from code
add column public_key text,
add column last_refreshed_at timestamp not null default now() -- Used to re-fetch federated actor periodically
;
-- Community
alter table community
add column actor_id character varying(255) not null default 'http://fake.com', -- This needs to be checked and updated in code, building from the site url if local
add column local boolean not null default true,
add column private_key text, -- These need to be generated from code
add column public_key text,
add column last_refreshed_at timestamp not null default now() -- Used to re-fetch federated actor periodically
;
-- Don't worry about rebuilding the views right now.

View File

@ -1,7 +0,0 @@
alter table post
drop column ap_id,
drop column local;
alter table comment
drop column ap_id,
drop column local;

View File

@ -1,14 +0,0 @@
-- Add federation columns to post, comment
alter table post
-- TODO uniqueness constraints should be added on these 3 columns later
add column ap_id character varying(255) not null default 'http://fake.com', -- This needs to be checked and updated in code, building from the site url if local
add column local boolean not null default true
;
alter table comment
-- TODO uniqueness constraints should be added on these 3 columns later
add column ap_id character varying(255) not null default 'http://fake.com', -- This needs to be checked and updated in code, building from the site url if local
add column local boolean not null default true
;

View File

@ -1,36 +0,0 @@
-- User table
drop view user_view cascade;
alter table user_
add column fedi_name varchar(40) not null default 'http://fake.com';
alter table user_
add constraint user__name_fedi_name_key unique (name, fedi_name);
-- Community
alter table community
add constraint community_name_key unique (name);
create view user_view as
select
u.id,
u.name,
u.avatar,
u.email,
u.matrix_user_id,
u.fedi_name,
u.admin,
u.banned,
u.show_avatars,
u.send_notifications_to_email,
u.published,
(select count(*) from post p where p.creator_id = u.id) as number_of_posts,
(select coalesce(sum(score), 0) from post p, post_like pl where u.id = p.creator_id and p.id = pl.post_id) as post_score,
(select count(*) from comment c where c.creator_id = u.id) as number_of_comments,
(select coalesce(sum(score), 0) from comment c, comment_like cl where u.id = c.creator_id and c.id = cl.comment_id) as comment_score
from user_ u;
create materialized view user_mview as select * from user_view;
create unique index idx_user_mview_id on user_mview (id);

View File

@ -1,38 +0,0 @@
-- User table
-- Need to regenerate user_view, user_mview
drop view user_view cascade;
-- Remove the fedi_name constraint and drop the now-unused column
alter table user_
drop constraint user__name_fedi_name_key;
alter table user_
drop column fedi_name;
-- Community
alter table community
drop constraint community_name_key;
create view user_view as
select
u.id,
u.name,
u.avatar,
u.email,
u.matrix_user_id,
u.admin,
u.banned,
u.show_avatars,
u.send_notifications_to_email,
u.published,
(select count(*) from post p where p.creator_id = u.id) as number_of_posts,
(select coalesce(sum(score), 0) from post p, post_like pl where u.id = p.creator_id and p.id = pl.post_id) as post_score,
(select count(*) from comment c where c.creator_id = u.id) as number_of_comments,
(select coalesce(sum(score), 0) from comment c, comment_like cl where u.id = c.creator_id and c.id = cl.comment_id) as comment_score
from user_ u;
create materialized view user_mview as select * from user_view;
create unique index idx_user_mview_id on user_mview (id);

View File

@ -1,440 +0,0 @@
-- user_view
drop view user_view cascade;
create view user_view as
select
u.id,
u.name,
u.avatar,
u.email,
u.matrix_user_id,
u.admin,
u.banned,
u.show_avatars,
u.send_notifications_to_email,
u.published,
(select count(*) from post p where p.creator_id = u.id) as number_of_posts,
(select coalesce(sum(score), 0) from post p, post_like pl where u.id = p.creator_id and p.id = pl.post_id) as post_score,
(select count(*) from comment c where c.creator_id = u.id) as number_of_comments,
(select coalesce(sum(score), 0) from comment c, comment_like cl where u.id = c.creator_id and c.id = cl.comment_id) as comment_score
from user_ u;
create materialized view user_mview as select * from user_view;
create unique index idx_user_mview_id on user_mview (id);
-- community_view
drop view community_aggregates_view cascade;
create view community_aggregates_view as
select c.*,
(select name from user_ u where c.creator_id = u.id) as creator_name,
(select avatar from user_ u where c.creator_id = u.id) as creator_avatar,
(select name from category ct where c.category_id = ct.id) as category_name,
(select count(*) from community_follower cf where cf.community_id = c.id) as number_of_subscribers,
(select count(*) from post p where p.community_id = c.id) as number_of_posts,
(select count(*) from comment co, post p where c.id = p.community_id and p.id = co.post_id) as number_of_comments,
hot_rank((select count(*) from community_follower cf where cf.community_id = c.id), c.published) as hot_rank
from community c;
create materialized view community_aggregates_mview as select * from community_aggregates_view;
create unique index idx_community_aggregates_mview_id on community_aggregates_mview (id);
create view community_view as
with all_community as
(
select
ca.*
from community_aggregates_view ca
)
select
ac.*,
u.id as user_id,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.id = cf.community_id) as subscribed
from user_ u
cross join all_community ac
union all
select
ac.*,
null as user_id,
null as subscribed
from all_community ac
;
create view community_mview as
with all_community as
(
select
ca.*
from community_aggregates_mview ca
)
select
ac.*,
u.id as user_id,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.id = cf.community_id) as subscribed
from user_ u
cross join all_community ac
union all
select
ac.*,
null as user_id,
null as subscribed
from all_community ac
;
-- community views
drop view community_moderator_view;
drop view community_follower_view;
drop view community_user_ban_view;
create view community_moderator_view as
select *,
(select name from user_ u where cm.user_id = u.id) as user_name,
(select avatar from user_ u where cm.user_id = u.id),
(select name from community c where cm.community_id = c.id) as community_name
from community_moderator cm;
create view community_follower_view as
select *,
(select name from user_ u where cf.user_id = u.id) as user_name,
(select avatar from user_ u where cf.user_id = u.id),
(select name from community c where cf.community_id = c.id) as community_name
from community_follower cf;
create view community_user_ban_view as
select *,
(select name from user_ u where cm.user_id = u.id) as user_name,
(select avatar from user_ u where cm.user_id = u.id),
(select name from community c where cm.community_id = c.id) as community_name
from community_user_ban cm;
-- post_view
drop view post_view;
drop view post_mview;
drop materialized view post_aggregates_mview;
drop view post_aggregates_view;
-- regen post view
create view post_aggregates_view as
select
p.*,
(select u.banned from user_ u where p.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb where p.creator_id = cb.user_id and p.community_id = cb.community_id) as banned_from_community,
(select name from user_ where p.creator_id = user_.id) as creator_name,
(select avatar from user_ where p.creator_id = user_.id) as creator_avatar,
(select name from community where p.community_id = community.id) as community_name,
(select removed from community c where p.community_id = c.id) as community_removed,
(select deleted from community c where p.community_id = c.id) as community_deleted,
(select nsfw from community c where p.community_id = c.id) as community_nsfw,
(select count(*) from comment where comment.post_id = p.id) as number_of_comments,
coalesce(sum(pl.score), 0) as score,
count (case when pl.score = 1 then 1 else null end) as upvotes,
count (case when pl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(pl.score) , 0),
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join post_like pl on p.id = pl.post_id
left join (
select post_id,
max(published) as recent_comment_time
from comment
group by 1
) c on p.id = c.post_id
group by p.id, c.recent_comment_time;
create materialized view post_aggregates_mview as select * from post_aggregates_view;
create unique index idx_post_aggregates_mview_id on post_aggregates_mview (id);
create view post_view as
with all_post as (
select
pa.*
from post_aggregates_view pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
create view post_mview as
with all_post as (
select
pa.*
from post_aggregates_mview pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
-- reply_view, comment_view, user_mention
drop view reply_view;
drop view user_mention_view;
drop view user_mention_mview;
drop view comment_view;
drop view comment_mview;
drop materialized view comment_aggregates_mview;
drop view comment_aggregates_view;
-- reply and comment view
create view comment_aggregates_view as
select
c.*,
(select community_id from post p where p.id = c.post_id),
(select co.name from post p, community co where p.id = c.post_id and p.community_id = co.id) as community_name,
(select u.banned from user_ u where c.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb, post p where c.creator_id = cb.user_id and p.id = c.post_id and p.community_id = cb.community_id) as banned_from_community,
(select name from user_ where c.creator_id = user_.id) as creator_name,
(select avatar from user_ where c.creator_id = user_.id) as creator_avatar,
coalesce(sum(cl.score), 0) as score,
count (case when cl.score = 1 then 1 else null end) as upvotes,
count (case when cl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(cl.score) , 0), c.published) as hot_rank
from comment c
left join comment_like cl on c.id = cl.comment_id
group by c.id;
create materialized view comment_aggregates_mview as select * from comment_aggregates_view;
create unique index idx_comment_aggregates_mview_id on comment_aggregates_mview (id);
create view comment_view as
with all_comment as
(
select
ca.*
from comment_aggregates_view ca
)
select
ac.*,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.community_id = cf.community_id) as subscribed,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
union all
select
ac.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from all_comment ac
;
create view comment_mview as
with all_comment as
(
select
ca.*
from comment_aggregates_mview ca
)
select
ac.*,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.community_id = cf.community_id) as subscribed,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
union all
select
ac.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from all_comment ac
;
-- Do the reply_view referencing the comment_mview
create view reply_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent is null); their recipient is the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_mview cv, closereply
where closereply.id = cv.id
;
-- user mention
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.post_id,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_mview as
with all_comment as
(
select
ca.*
from comment_aggregates_mview ca
)
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id
from all_comment ac
left join user_mention um on um.comment_id = ac.id
;

View File

@ -1,497 +0,0 @@
-- user_view
drop view user_view cascade;
create view user_view as
select
u.id,
u.actor_id,
u.name,
u.avatar,
u.email,
u.matrix_user_id,
u.bio,
u.local,
u.admin,
u.banned,
u.show_avatars,
u.send_notifications_to_email,
u.published,
(select count(*) from post p where p.creator_id = u.id) as number_of_posts,
(select coalesce(sum(score), 0) from post p, post_like pl where u.id = p.creator_id and p.id = pl.post_id) as post_score,
(select count(*) from comment c where c.creator_id = u.id) as number_of_comments,
(select coalesce(sum(score), 0) from comment c, comment_like cl where u.id = c.creator_id and c.id = cl.comment_id) as comment_score
from user_ u;
create materialized view user_mview as select * from user_view;
create unique index idx_user_mview_id on user_mview (id);
-- community_view
drop view community_aggregates_view cascade;
create view community_aggregates_view as
-- Now that there are public and private key columns, the column list must be explicit here
select c.id,
c.name,
c.title,
c.description,
c.category_id,
c.creator_id,
c.removed,
c.published,
c.updated,
c.deleted,
c.nsfw,
c.actor_id,
c.local,
c.last_refreshed_at,
(select actor_id from user_ u where c.creator_id = u.id) as creator_actor_id,
(select local from user_ u where c.creator_id = u.id) as creator_local,
(select name from user_ u where c.creator_id = u.id) as creator_name,
(select avatar from user_ u where c.creator_id = u.id) as creator_avatar,
(select name from category ct where c.category_id = ct.id) as category_name,
(select count(*) from community_follower cf where cf.community_id = c.id) as number_of_subscribers,
(select count(*) from post p where p.community_id = c.id) as number_of_posts,
(select count(*) from comment co, post p where c.id = p.community_id and p.id = co.post_id) as number_of_comments,
hot_rank((select count(*) from community_follower cf where cf.community_id = c.id), c.published) as hot_rank
from community c;
create materialized view community_aggregates_mview as select * from community_aggregates_view;
create unique index idx_community_aggregates_mview_id on community_aggregates_mview (id);
create view community_view as
with all_community as
(
select
ca.*
from community_aggregates_view ca
)
select
ac.*,
u.id as user_id,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.id = cf.community_id) as subscribed
from user_ u
cross join all_community ac
union all
select
ac.*,
null as user_id,
null as subscribed
from all_community ac
;
create view community_mview as
with all_community as
(
select
ca.*
from community_aggregates_mview ca
)
select
ac.*,
u.id as user_id,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.id = cf.community_id) as subscribed
from user_ u
cross join all_community ac
union all
select
ac.*,
null as user_id,
null as subscribed
from all_community ac
;
-- community views
drop view community_moderator_view;
drop view community_follower_view;
drop view community_user_ban_view;
create view community_moderator_view as
select *,
(select actor_id from user_ u where cm.user_id = u.id) as user_actor_id,
(select local from user_ u where cm.user_id = u.id) as user_local,
(select name from user_ u where cm.user_id = u.id) as user_name,
(select avatar from user_ u where cm.user_id = u.id),
(select actor_id from community c where cm.community_id = c.id) as community_actor_id,
(select local from community c where cm.community_id = c.id) as community_local,
(select name from community c where cm.community_id = c.id) as community_name
from community_moderator cm;
create view community_follower_view as
select *,
(select actor_id from user_ u where cf.user_id = u.id) as user_actor_id,
(select local from user_ u where cf.user_id = u.id) as user_local,
(select name from user_ u where cf.user_id = u.id) as user_name,
(select avatar from user_ u where cf.user_id = u.id),
(select actor_id from community c where cf.community_id = c.id) as community_actor_id,
(select local from community c where cf.community_id = c.id) as community_local,
(select name from community c where cf.community_id = c.id) as community_name
from community_follower cf;
create view community_user_ban_view as
select *,
(select actor_id from user_ u where cm.user_id = u.id) as user_actor_id,
(select local from user_ u where cm.user_id = u.id) as user_local,
(select name from user_ u where cm.user_id = u.id) as user_name,
(select avatar from user_ u where cm.user_id = u.id),
(select actor_id from community c where cm.community_id = c.id) as community_actor_id,
(select local from community c where cm.community_id = c.id) as community_local,
(select name from community c where cm.community_id = c.id) as community_name
from community_user_ban cm;
-- post_view
drop view post_view;
drop view post_mview;
drop materialized view post_aggregates_mview;
drop view post_aggregates_view;
-- regen post view
create view post_aggregates_view as
select
p.*,
(select u.banned from user_ u where p.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb where p.creator_id = cb.user_id and p.community_id = cb.community_id) as banned_from_community,
(select actor_id from user_ where p.creator_id = user_.id) as creator_actor_id,
(select local from user_ where p.creator_id = user_.id) as creator_local,
(select name from user_ where p.creator_id = user_.id) as creator_name,
(select avatar from user_ where p.creator_id = user_.id) as creator_avatar,
(select actor_id from community where p.community_id = community.id) as community_actor_id,
(select local from community where p.community_id = community.id) as community_local,
(select name from community where p.community_id = community.id) as community_name,
(select removed from community c where p.community_id = c.id) as community_removed,
(select deleted from community c where p.community_id = c.id) as community_deleted,
(select nsfw from community c where p.community_id = c.id) as community_nsfw,
(select count(*) from comment where comment.post_id = p.id) as number_of_comments,
coalesce(sum(pl.score), 0) as score,
count (case when pl.score = 1 then 1 else null end) as upvotes,
count (case when pl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(pl.score) , 0),
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join post_like pl on p.id = pl.post_id
left join (
select post_id,
max(published) as recent_comment_time
from comment
group by 1
) c on p.id = c.post_id
group by p.id, c.recent_comment_time;
create materialized view post_aggregates_mview as select * from post_aggregates_view;
create unique index idx_post_aggregates_mview_id on post_aggregates_mview (id);
create view post_view as
with all_post as (
select
pa.*
from post_aggregates_view pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
create view post_mview as
with all_post as (
select
pa.*
from post_aggregates_mview pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
-- reply_view, comment_view, user_mention
drop view reply_view;
drop view user_mention_view;
drop view user_mention_mview;
drop view comment_view;
drop view comment_mview;
drop materialized view comment_aggregates_mview;
drop view comment_aggregates_view;
-- reply and comment view
create view comment_aggregates_view as
select
c.*,
(select community_id from post p where p.id = c.post_id),
(select co.actor_id from post p, community co where p.id = c.post_id and p.community_id = co.id) as community_actor_id,
(select co.local from post p, community co where p.id = c.post_id and p.community_id = co.id) as community_local,
(select co.name from post p, community co where p.id = c.post_id and p.community_id = co.id) as community_name,
(select u.banned from user_ u where c.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb, post p where c.creator_id = cb.user_id and p.id = c.post_id and p.community_id = cb.community_id) as banned_from_community,
(select actor_id from user_ where c.creator_id = user_.id) as creator_actor_id,
(select local from user_ where c.creator_id = user_.id) as creator_local,
(select name from user_ where c.creator_id = user_.id) as creator_name,
(select avatar from user_ where c.creator_id = user_.id) as creator_avatar,
coalesce(sum(cl.score), 0) as score,
count (case when cl.score = 1 then 1 else null end) as upvotes,
count (case when cl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(cl.score) , 0), c.published) as hot_rank
from comment c
left join comment_like cl on c.id = cl.comment_id
group by c.id;
create materialized view comment_aggregates_mview as select * from comment_aggregates_view;
create unique index idx_comment_aggregates_mview_id on comment_aggregates_mview (id);
create view comment_view as
with all_comment as
(
select
ca.*
from comment_aggregates_view ca
)
select
ac.*,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.community_id = cf.community_id) as subscribed,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
union all
select
ac.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from all_comment ac
;
create view comment_mview as
with all_comment as
(
select
ca.*
from comment_aggregates_mview ca
)
select
ac.*,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.community_id = cf.community_id) as subscribed,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
union all
select
ac.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from all_comment ac
;
-- Do the reply_view referencing the comment_mview
create view reply_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent is null); their recipient is the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_mview cv, closereply
where closereply.id = cv.id
;
-- user mention
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.creator_actor_id,
c.creator_local,
c.post_id,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_actor_id,
c.community_local,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_mview as
with all_comment as
(
select
ca.*
from comment_aggregates_mview ca
)
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from all_comment ac
left join user_mention um on um.comment_id = ac.id
;

View File

@ -1,4 +0,0 @@
-- The username index
drop index idx_user_name_lower_actor_id;
create unique index idx_user_name_lower on user_ (lower(name));

View File

@ -1,2 +0,0 @@
drop index idx_user_name_lower;
create unique index idx_user_name_lower_actor_id on user_ (lower(name), lower(actor_id));

View File

@ -1,21 +0,0 @@
drop materialized view private_message_mview;
drop view private_message_view;
alter table private_message
drop column ap_id,
drop column local;
create view private_message_view as
select
pm.*,
u.name as creator_name,
u.avatar as creator_avatar,
u2.name as recipient_name,
u2.avatar as recipient_avatar
from private_message pm
inner join user_ u on u.id = pm.creator_id
inner join user_ u2 on u2.id = pm.recipient_id;
create materialized view private_message_mview as select * from private_message_view;
create unique index idx_private_message_mview_id on private_message_mview (id);

View File

@ -1,25 +0,0 @@
alter table private_message
add column ap_id character varying(255) not null default 'http://fake.com', -- This needs to be checked and updated in code, building from the site url if local
add column local boolean not null default true
;
drop materialized view private_message_mview;
drop view private_message_view;
create view private_message_view as
select
pm.*,
u.name as creator_name,
u.avatar as creator_avatar,
u.actor_id as creator_actor_id,
u.local as creator_local,
u2.name as recipient_name,
u2.avatar as recipient_avatar,
u2.actor_id as recipient_actor_id,
u2.local as recipient_local
from private_message pm
inner join user_ u on u.id = pm.creator_id
inner join user_ u2 on u2.id = pm.recipient_id;
create materialized view private_message_mview as select * from private_message_view;
create unique index idx_private_message_mview_id on private_message_mview (id);

View File

@ -1,535 +0,0 @@
-- Dropping all the fast tables
drop table user_fast;
drop view post_fast_view;
drop table post_aggregates_fast;
drop view community_fast_view;
drop table community_aggregates_fast;
drop view reply_fast_view;
drop view user_mention_fast_view;
drop view comment_fast_view;
drop table comment_aggregates_fast;
-- Re-adding all the triggers, functions, and mviews
-- private message
create materialized view private_message_mview as select * from private_message_view;
create unique index idx_private_message_mview_id on private_message_mview (id);
-- Create the triggers
create or replace function refresh_private_message()
returns trigger language plpgsql
as $$
begin
refresh materialized view concurrently private_message_mview;
return null;
end $$;
create trigger refresh_private_message
after insert or update or delete or truncate
on private_message
for each statement
execute procedure refresh_private_message();
-- user
create or replace function refresh_user()
returns trigger language plpgsql
as $$
begin
refresh materialized view concurrently user_mview;
refresh materialized view concurrently comment_aggregates_mview; -- cause of bans
refresh materialized view concurrently post_aggregates_mview;
return null;
end $$;
drop trigger refresh_user on user_;
create trigger refresh_user
after insert or update or delete or truncate
on user_
for each statement
execute procedure refresh_user();
drop view user_view cascade;
create view user_view as
select
u.id,
u.actor_id,
u.name,
u.avatar,
u.email,
u.matrix_user_id,
u.bio,
u.local,
u.admin,
u.banned,
u.show_avatars,
u.send_notifications_to_email,
u.published,
(select count(*) from post p where p.creator_id = u.id) as number_of_posts,
(select coalesce(sum(score), 0) from post p, post_like pl where u.id = p.creator_id and p.id = pl.post_id) as post_score,
(select count(*) from comment c where c.creator_id = u.id) as number_of_comments,
(select coalesce(sum(score), 0) from comment c, comment_like cl where u.id = c.creator_id and c.id = cl.comment_id) as comment_score
from user_ u;
create materialized view user_mview as select * from user_view;
create unique index idx_user_mview_id on user_mview (id);
-- community
drop trigger refresh_community on community;
create trigger refresh_community
after insert or update or delete or truncate
on community
for each statement
execute procedure refresh_community();
create or replace function refresh_community()
returns trigger language plpgsql
as $$
begin
refresh materialized view concurrently post_aggregates_mview;
refresh materialized view concurrently community_aggregates_mview;
refresh materialized view concurrently user_mview;
return null;
end $$;
drop view community_aggregates_view cascade;
create view community_aggregates_view as
-- Now that there are public and private key columns, the column list must be explicit here
select c.id,
c.name,
c.title,
c.description,
c.category_id,
c.creator_id,
c.removed,
c.published,
c.updated,
c.deleted,
c.nsfw,
c.actor_id,
c.local,
c.last_refreshed_at,
(select actor_id from user_ u where c.creator_id = u.id) as creator_actor_id,
(select local from user_ u where c.creator_id = u.id) as creator_local,
(select name from user_ u where c.creator_id = u.id) as creator_name,
(select avatar from user_ u where c.creator_id = u.id) as creator_avatar,
(select name from category ct where c.category_id = ct.id) as category_name,
(select count(*) from community_follower cf where cf.community_id = c.id) as number_of_subscribers,
(select count(*) from post p where p.community_id = c.id) as number_of_posts,
(select count(*) from comment co, post p where c.id = p.community_id and p.id = co.post_id) as number_of_comments,
hot_rank((select count(*) from community_follower cf where cf.community_id = c.id), c.published) as hot_rank
from community c;
create materialized view community_aggregates_mview as select * from community_aggregates_view;
create unique index idx_community_aggregates_mview_id on community_aggregates_mview (id);
create view community_view as
with all_community as
(
select
ca.*
from community_aggregates_view ca
)
select
ac.*,
u.id as user_id,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.id = cf.community_id) as subscribed
from user_ u
cross join all_community ac
union all
select
ac.*,
null as user_id,
null as subscribed
from all_community ac
;
create view community_mview as
with all_community as
(
select
ca.*
from community_aggregates_mview ca
)
select
ac.*,
u.id as user_id,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.id = cf.community_id) as subscribed
from user_ u
cross join all_community ac
union all
select
ac.*,
null as user_id,
null as subscribed
from all_community ac
;
-- Post
drop view post_view;
drop view post_aggregates_view;
-- regen post view
create view post_aggregates_view as
select
p.*,
(select u.banned from user_ u where p.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb where p.creator_id = cb.user_id and p.community_id = cb.community_id) as banned_from_community,
(select actor_id from user_ where p.creator_id = user_.id) as creator_actor_id,
(select local from user_ where p.creator_id = user_.id) as creator_local,
(select name from user_ where p.creator_id = user_.id) as creator_name,
(select avatar from user_ where p.creator_id = user_.id) as creator_avatar,
(select actor_id from community where p.community_id = community.id) as community_actor_id,
(select local from community where p.community_id = community.id) as community_local,
(select name from community where p.community_id = community.id) as community_name,
(select removed from community c where p.community_id = c.id) as community_removed,
(select deleted from community c where p.community_id = c.id) as community_deleted,
(select nsfw from community c where p.community_id = c.id) as community_nsfw,
(select count(*) from comment where comment.post_id = p.id) as number_of_comments,
coalesce(sum(pl.score), 0) as score,
count (case when pl.score = 1 then 1 else null end) as upvotes,
count (case when pl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(pl.score) , 0),
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case when (p.published < ('now'::timestamp - '1 month'::interval)) then p.published -- Prevents necro-bumps
else greatest(c.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join post_like pl on p.id = pl.post_id
left join (
select post_id,
max(published) as recent_comment_time
from comment
group by 1
) c on p.id = c.post_id
group by p.id, c.recent_comment_time;
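-- The case expressions above feed hot_rank the post's effective activity
-- time: posts older than a month rank by publish time alone, so a new
-- comment can no longer bump them.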
create materialized view post_aggregates_mview as select * from post_aggregates_view;
create unique index idx_post_aggregates_mview_id on post_aggregates_mview (id);
create view post_view as
with all_post as (
select
pa.*
from post_aggregates_view pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
create view post_mview as
with all_post as (
select
pa.*
from post_aggregates_mview pa
)
select
ap.*,
u.id as user_id,
coalesce(pl.score, 0) as my_vote,
(select cf.id::bool from community_follower cf where u.id = cf.user_id and cf.community_id = ap.community_id) as subscribed,
(select pr.id::bool from post_read pr where u.id = pr.user_id and pr.post_id = ap.id) as read,
(select ps.id::bool from post_saved ps where u.id = ps.user_id and ps.post_id = ap.id) as saved
from user_ u
cross join all_post ap
left join post_like pl on u.id = pl.user_id and ap.id = pl.post_id
union all
select
ap.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from all_post ap
;
drop trigger refresh_post on post;
create trigger refresh_post
after insert or update or delete or truncate
on post
for each statement
execute procedure refresh_post();
create or replace function refresh_post()
returns trigger language plpgsql
as $$
begin
refresh materialized view concurrently post_aggregates_mview;
refresh materialized view concurrently user_mview;
return null;
end $$;
-- User mention, comment, reply
drop view user_mention_view;
drop view comment_view;
drop view comment_aggregates_view;
-- reply and comment view
create view comment_aggregates_view as
select
c.*,
(select community_id from post p where p.id = c.post_id) as community_id,
(select co.actor_id from post p, community co where p.id = c.post_id and p.community_id = co.id) as community_actor_id,
(select co.local from post p, community co where p.id = c.post_id and p.community_id = co.id) as community_local,
(select co.name from post p, community co where p.id = c.post_id and p.community_id = co.id) as community_name,
(select u.banned from user_ u where c.creator_id = u.id) as banned,
(select cb.id::bool from community_user_ban cb, post p where c.creator_id = cb.user_id and p.id = c.post_id and p.community_id = cb.community_id) as banned_from_community,
(select actor_id from user_ where c.creator_id = user_.id) as creator_actor_id,
(select local from user_ where c.creator_id = user_.id) as creator_local,
(select name from user_ where c.creator_id = user_.id) as creator_name,
(select avatar from user_ where c.creator_id = user_.id) as creator_avatar,
coalesce(sum(cl.score), 0) as score,
count (case when cl.score = 1 then 1 else null end) as upvotes,
count (case when cl.score = -1 then 1 else null end) as downvotes,
hot_rank(coalesce(sum(cl.score) , 0), c.published) as hot_rank
from comment c
left join comment_like cl on c.id = cl.comment_id
group by c.id;
create materialized view comment_aggregates_mview as select * from comment_aggregates_view;
create unique index idx_comment_aggregates_mview_id on comment_aggregates_mview (id);
create view comment_view as
with all_comment as
(
select
ca.*
from comment_aggregates_view ca
)
select
ac.*,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.community_id = cf.community_id) as subscribed,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
union all
select
ac.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from all_comment ac
;
create view comment_mview as
with all_comment as
(
select
ca.*
from comment_aggregates_mview ca
)
select
ac.*,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.community_id = cf.community_id) as subscribed,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
union all
select
ac.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from all_comment ac
;
-- Do the reply_view referencing the comment_mview
create view reply_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent_id is null), which reply to the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_mview cv, closereply
where closereply.id = cv.id
;
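-- closereply pairs each comment with the user it replies to: child comments
-- notify the parent comment's creator, top-level comments notify the post
-- creator, and the != checks drop self-replies.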
-- user mention
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.creator_actor_id,
c.creator_local,
c.post_id,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_actor_id,
c.community_local,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_mview as
with all_comment as
(
select
ca.*
from comment_aggregates_mview ca
)
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_ u
cross join all_comment ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from all_comment ac
left join user_mention um on um.comment_id = ac.id
;
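-- Hypothetical sample read (ids assumed to exist):
-- select content, recipient_actor_id from user_mention_mview
-- where recipient_id = 4 and read = false;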

View File

@ -1,939 +0,0 @@
-- Drop the mviews
drop view post_mview;
drop materialized view user_mview;
drop view community_mview;
drop materialized view private_message_mview;
drop view user_mention_mview;
drop view reply_view;
drop view comment_mview;
drop materialized view post_aggregates_mview;
drop materialized view community_aggregates_mview;
drop materialized view comment_aggregates_mview;
drop trigger refresh_private_message on private_message;
-- User
drop view user_view;
create view user_view as
select
u.id,
u.actor_id,
u.name,
u.avatar,
u.email,
u.matrix_user_id,
u.bio,
u.local,
u.admin,
u.banned,
u.show_avatars,
u.send_notifications_to_email,
u.published,
coalesce(pd.posts, 0) as number_of_posts,
coalesce(pd.score, 0) as post_score,
coalesce(cd.comments, 0) as number_of_comments,
coalesce(cd.score, 0) as comment_score
from user_ u
left join (
select
p.creator_id as creator_id,
count(distinct p.id) as posts,
sum(pl.score) as score
from post p
join post_like pl on p.id = pl.post_id
group by p.creator_id
) pd on u.id = pd.creator_id
left join (
select
c.creator_id,
count(distinct c.id) as comments,
sum(cl.score) as score
from comment c
join comment_like cl on c.id = cl.comment_id
group by c.creator_id
) cd on u.id = cd.creator_id;
create table user_fast as select * from user_view;
alter table user_fast add primary key (id);
drop trigger refresh_user on user_;
create trigger refresh_user
after insert or update or delete
on user_
for each row
execute procedure refresh_user();
-- Sample insert
-- insert into user_(name, password_encrypted) values ('test_name', 'bleh');
-- Sample delete
-- delete from user_ where name like 'test_name';
-- Sample update
-- update user_ set avatar = 'hai' where name like 'test_name';
create or replace function refresh_user()
returns trigger language plpgsql
as $$
begin
IF (TG_OP = 'DELETE') THEN
delete from user_fast where id = OLD.id;
ELSIF (TG_OP = 'UPDATE') THEN
delete from user_fast where id = OLD.id;
insert into user_fast select * from user_view where id = NEW.id;
-- Refresh post_aggregates_fast and comment_aggregates_fast, because of user info changes
delete from post_aggregates_fast where creator_id = NEW.id;
insert into post_aggregates_fast select * from post_aggregates_view where creator_id = NEW.id;
delete from comment_aggregates_fast where creator_id = NEW.id;
insert into comment_aggregates_fast select * from comment_aggregates_view where creator_id = NEW.id;
ELSIF (TG_OP = 'INSERT') THEN
insert into user_fast select * from user_view where id = NEW.id;
END IF;
return null;
end $$;
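-- The delete-then-insert pattern above re-derives just the affected rows from
-- the canonical view instead of refreshing a whole materialized view, which
-- is what makes the _fast tables cheap to keep current.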
-- Post
-- Redoing the views: credit eiknat
drop view post_view;
drop view post_aggregates_view;
create view post_aggregates_view as
select
p.*,
-- creator details
u.actor_id as creator_actor_id,
u."local" as creator_local,
u."name" as creator_name,
u.avatar as creator_avatar,
u.banned as banned,
cb.id::bool as banned_from_community,
-- community details
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
c.removed as community_removed,
c.deleted as community_deleted,
c.nsfw as community_nsfw,
-- post score data/comment count
coalesce(ct.comments, 0) as number_of_comments,
coalesce(pl.score, 0) as score,
coalesce(pl.upvotes, 0) as upvotes,
coalesce(pl.downvotes, 0) as downvotes,
hot_rank(
coalesce(pl.score , 0), (
case
when (p.published < ('now'::timestamp - '1 month'::interval))
then p.published
else greatest(ct.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case
when (p.published < ('now'::timestamp - '1 month'::interval))
then p.published
else greatest(ct.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join user_ u on p.creator_id = u.id
left join community_user_ban cb on p.creator_id = cb.user_id and p.community_id = cb.community_id
left join community c on p.community_id = c.id
left join (
select
post_id,
count(*) as comments,
max(published) as recent_comment_time
from comment
group by post_id
) ct on ct.post_id = p.id
left join (
select
post_id,
sum(score) as score,
sum(score) filter (where score = 1) as upvotes,
-sum(score) filter (where score = -1) as downvotes
from post_like
group by post_id
) pl on pl.post_id = p.id
order by p.id;
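-- This rewrite swaps the old view's per-row correlated subqueries for left
-- joins against pre-grouped subqueries, so comment and post_like are
-- aggregated once up front instead of being probed for every output row.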
create view post_view as
select
pav.*,
us.id as user_id,
us.user_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_read::bool as read,
us.is_saved::bool as saved
from post_aggregates_view pav
cross join lateral (
select
u.id,
coalesce(cf.community_id, 0) as is_subbed,
coalesce(pr.post_id, 0) as is_read,
coalesce(ps.post_id, 0) as is_saved,
coalesce(pl.score, 0) as user_vote
from user_ u
left join community_user_ban cb on u.id = cb.user_id and cb.community_id = pav.community_id
left join community_follower cf on u.id = cf.user_id and cf.community_id = pav.community_id
left join post_read pr on u.id = pr.user_id and pr.post_id = pav.id
left join post_saved ps on u.id = ps.user_id and ps.post_id = pav.id
left join post_like pl on u.id = pl.user_id and pav.id = pl.post_id
) as us
union all
select
pav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from post_aggregates_view pav;
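-- cross join lateral computes all per-user state (vote, subscribed, read,
-- saved) in one subquery per post row; the union all branch appends the
-- logged-out variant with null user columns, as before.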
-- The post fast table
create table post_aggregates_fast as select * from post_aggregates_view;
alter table post_aggregates_fast add primary key (id);
-- For the hot rank resorting
create index idx_post_aggregates_fast_hot_rank_published on post_aggregates_fast (hot_rank desc, published desc);
create view post_fast_view as
select
pav.*,
us.id as user_id,
us.user_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_read::bool as read,
us.is_saved::bool as saved
from post_aggregates_fast pav
cross join lateral (
select
u.id,
coalesce(cf.community_id, 0) as is_subbed,
coalesce(pr.post_id, 0) as is_read,
coalesce(ps.post_id, 0) as is_saved,
coalesce(pl.score, 0) as user_vote
from user_ u
left join community_user_ban cb on u.id = cb.user_id and cb.community_id = pav.community_id
left join community_follower cf on u.id = cf.user_id and cf.community_id = pav.community_id
left join post_read pr on u.id = pr.user_id and pr.post_id = pav.id
left join post_saved ps on u.id = ps.user_id and ps.post_id = pav.id
left join post_like pl on u.id = pl.user_id and pav.id = pl.post_id
) as us
union all
select
pav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from post_aggregates_fast pav;
drop trigger refresh_post on post;
create trigger refresh_post
after insert or update or delete
on post
for each row
execute procedure refresh_post();
-- Sample select
-- select id, name from post_fast_view where name like 'test_post' and user_id is null;
-- Sample insert
-- insert into post(name, creator_id, community_id) values ('test_post', 2, 2);
-- Sample delete
-- delete from post where name like 'test_post';
-- Sample update
-- update post set community_id = 4 where name like 'test_post';
create or replace function refresh_post()
returns trigger language plpgsql
as $$
begin
IF (TG_OP = 'DELETE') THEN
delete from post_aggregates_fast where id = OLD.id;
-- Update community number of posts
update community_aggregates_fast set number_of_posts = number_of_posts - 1 where id = OLD.community_id;
ELSIF (TG_OP = 'UPDATE') THEN
delete from post_aggregates_fast where id = OLD.id;
insert into post_aggregates_fast select * from post_aggregates_view where id = NEW.id;
ELSIF (TG_OP = 'INSERT') THEN
insert into post_aggregates_fast select * from post_aggregates_view where id = NEW.id;
-- Update that user's number of posts and post score
delete from user_fast where id = NEW.creator_id;
insert into user_fast select * from user_view where id = NEW.creator_id;
-- Update community number of posts
update community_aggregates_fast set number_of_posts = number_of_posts + 1 where id = NEW.community_id;
-- Update the hot rank on the post table
-- TODO this might not update it correctly; it only touches posts published within the last week
update post_aggregates_fast as paf
set hot_rank = pav.hot_rank
from post_aggregates_view as pav
where paf.id = pav.id and (pav.published > ('now'::timestamp - '1 week'::interval));
END IF;
return null;
end $$;
-- Community
-- Redoing the views: credit eiknat
drop view community_moderator_view;
drop view community_follower_view;
drop view community_user_ban_view;
drop view community_view;
drop view community_aggregates_view;
create view community_aggregates_view as
select
c.id,
c.name,
c.title,
c.description,
c.category_id,
c.creator_id,
c.removed,
c.published,
c.updated,
c.deleted,
c.nsfw,
c.actor_id,
c.local,
c.last_refreshed_at,
u.actor_id as creator_actor_id,
u.local as creator_local,
u.name as creator_name,
u.avatar as creator_avatar,
cat.name as category_name,
coalesce(cf.subs, 0) as number_of_subscribers,
coalesce(cd.posts, 0) as number_of_posts,
coalesce(cd.comments, 0) as number_of_comments,
hot_rank(coalesce(cf.subs, 0), c.published) as hot_rank
from community c
left join user_ u on c.creator_id = u.id
left join category cat on c.category_id = cat.id
left join (
select
p.community_id,
count(distinct p.id) as posts,
count(distinct ct.id) as comments
from post p
join comment ct on p.id = ct.post_id
group by p.community_id
) cd on cd.community_id = c.id
left join (
select
community_id,
count(*) as subs
from community_follower
group by community_id
) cf on cf.community_id = c.id;
create view community_view as
select
cv.*,
us.user as user_id,
us.is_subbed::bool as subscribed
from community_aggregates_view cv
cross join lateral (
select
u.id as user,
coalesce(cf.community_id, 0) as is_subbed
from user_ u
left join community_follower cf on u.id = cf.user_id and cf.community_id = cv.id
) as us
union all
select
cv.*,
null as user_id,
null as subscribed
from community_aggregates_view cv;
create view community_moderator_view as
select
cm.*,
u.actor_id as user_actor_id,
u.local as user_local,
u.name as user_name,
u.avatar as avatar,
c.actor_id as community_actor_id,
c.local as community_local,
c.name as community_name
from community_moderator cm
left join user_ u on cm.user_id = u.id
left join community c on cm.community_id = c.id;
create view community_follower_view as
select
cf.*,
u.actor_id as user_actor_id,
u.local as user_local,
u.name as user_name,
u.avatar as avatar,
c.actor_id as community_actor_id,
c.local as community_local,
c.name as community_name
from community_follower cf
left join user_ u on cf.user_id = u.id
left join community c on cf.community_id = c.id;
create view community_user_ban_view as
select
cb.*,
u.actor_id as user_actor_id,
u.local as user_local,
u.name as user_name,
u.avatar as avatar,
c.actor_id as community_actor_id,
c.local as community_local,
c.name as community_name
from community_user_ban cb
left join user_ u on cb.user_id = u.id
left join community c on cb.community_id = c.id;
-- The community fast table
create table community_aggregates_fast as select * from community_aggregates_view;
alter table community_aggregates_fast add primary key (id);
create view community_fast_view as
select
ac.*,
u.id as user_id,
(select cf.id::boolean from community_follower cf where u.id = cf.user_id and ac.id = cf.community_id) as subscribed
from user_ u
cross join (
select
ca.*
from community_aggregates_fast ca
) ac
union all
select
caf.*,
null as user_id,
null as subscribed
from community_aggregates_fast caf;
drop trigger refresh_community on community;
create trigger refresh_community
after insert or update or delete
on community
for each row
execute procedure refresh_community();
-- Sample select
-- select * from community_fast_view where name like 'test_community_name' and user_id is null;
-- Sample insert
-- insert into community(name, title, category_id, creator_id) values ('test_community_name', 'test_community_title', 1, 2);
-- Sample delete
-- delete from community where name like 'test_community_name';
-- Sample update
-- update community set title = 'test_community_title_2' where name like 'test_community_name';
create or replace function refresh_community()
returns trigger language plpgsql
as $$
begin
IF (TG_OP = 'DELETE') THEN
delete from community_aggregates_fast where id = OLD.id;
ELSIF (TG_OP = 'UPDATE') THEN
delete from community_aggregates_fast where id = OLD.id;
insert into community_aggregates_fast select * from community_aggregates_view where id = NEW.id;
-- Update user view due to owner changes
delete from user_fast where id = NEW.creator_id;
insert into user_fast select * from user_view where id = NEW.creator_id;
-- Update post view due to community changes
delete from post_aggregates_fast where community_id = NEW.id;
insert into post_aggregates_fast select * from post_aggregates_view where community_id = NEW.id;
-- TODO make sure this shows up on the user's page
ELSIF (TG_OP = 'INSERT') THEN
insert into community_aggregates_fast select * from community_aggregates_view where id = NEW.id;
END IF;
return null;
end $$;
-- Comment
drop view user_mention_view;
drop view comment_view;
drop view comment_aggregates_view;
create view comment_aggregates_view as
select
ct.*,
-- community details
p.community_id,
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
-- creator details
u.banned as banned,
coalesce(cb.id, 0)::bool as banned_from_community,
u.actor_id as creator_actor_id,
u.local as creator_local,
u.name as creator_name,
u.avatar as creator_avatar,
-- score details
coalesce(cl.total, 0) as score,
coalesce(cl.up, 0) as upvotes,
coalesce(cl.down, 0) as downvotes,
hot_rank(coalesce(cl.total, 0), ct.published) as hot_rank
from comment ct
left join post p on ct.post_id = p.id
left join community c on p.community_id = c.id
left join user_ u on ct.creator_id = u.id
left join community_user_ban cb on ct.creator_id = cb.user_id and p.id = ct.post_id and p.community_id = cb.community_id
left join (
select
l.comment_id as id,
sum(l.score) as total,
count(case when l.score = 1 then 1 else null end) as up,
count(case when l.score = -1 then 1 else null end) as down
from comment_like l
group by comment_id
) as cl on cl.id = ct.id;
create or replace view comment_view as (
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_view cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_view cav
);
-- The fast view
create table comment_aggregates_fast as select * from comment_aggregates_view;
alter table comment_aggregates_fast add primary key (id);
create view comment_fast_view as
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_fast cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_fast cav;
-- Do the reply_view referencing the comment_fast_view
create view reply_fast_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent_id is null), which reply to the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_fast_view cv, closereply
where closereply.id = cv.id
;
-- user mention
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.creator_actor_id,
c.creator_local,
c.post_id,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_actor_id,
c.community_local,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_fast_view as
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_ u
cross join (
select
ca.*
from comment_aggregates_fast ca
) ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from comment_aggregates_fast ac
left join user_mention um on um.comment_id = ac.id
;
drop trigger refresh_comment on comment;
create trigger refresh_comment
after insert or update or delete
on comment
for each row
execute procedure refresh_comment();
-- Sample select
-- select * from comment_fast_view where content = 'test_comment' and user_id is null;
-- Sample insert
-- insert into comment(creator_id, post_id, content) values (2, 2, 'test_comment');
-- Sample delete
-- delete from comment where content like 'test_comment';
-- Sample update
-- update comment set removed = true where content like 'test_comment';
create or replace function refresh_comment()
returns trigger language plpgsql
as $$
begin
IF (TG_OP = 'DELETE') THEN
delete from comment_aggregates_fast where id = OLD.id;
-- Update community number of comments
update community_aggregates_fast as caf
set number_of_comments = number_of_comments - 1
from post as p
where caf.id = p.community_id and p.id = OLD.post_id;
ELSIF (TG_OP = 'UPDATE') THEN
delete from comment_aggregates_fast where id = OLD.id;
insert into comment_aggregates_fast select * from comment_aggregates_view where id = NEW.id;
ELSIF (TG_OP = 'INSERT') THEN
insert into comment_aggregates_fast select * from comment_aggregates_view where id = NEW.id;
-- Update user view due to comment count
update user_fast
set number_of_comments = number_of_comments + 1
where id = NEW.creator_id;
-- Update post view due to comment count, new comment activity time, but only on new posts
-- TODO this could be done more efficiently
delete from post_aggregates_fast where id = NEW.post_id;
insert into post_aggregates_fast select * from post_aggregates_view where id = NEW.post_id;
-- Force the hot rank to zero on posts older than a week
update post_aggregates_fast as paf
set hot_rank = 0
where paf.id = NEW.post_id and (paf.published < ('now'::timestamp - '1 week'::interval));
-- Update community number of comments
update community_aggregates_fast as caf
set number_of_comments = number_of_comments + 1
from post as p
where caf.id = p.community_id and p.id = NEW.post_id;
END IF;
return null;
end $$;
-- post_like
-- Sample select
-- select id, score, my_vote from post_fast_view where id = 29 and user_id = 4;
-- Sample insert
-- insert into post_like(user_id, post_id, score) values (4, 29, 1);
-- Sample delete
-- delete from post_like where user_id = 4 and post_id = 29;
-- Sample update
-- update post_like set score = -1 where user_id = 4 and post_id = 29;
-- TODO test this a LOT
create or replace function refresh_post_like()
returns trigger language plpgsql
as $$
begin
IF (TG_OP = 'DELETE') THEN
update post_aggregates_fast
set score = case
when (OLD.score = 1) then score - 1
else score + 1 end,
upvotes = case
when (OLD.score = 1) then upvotes - 1
else upvotes end,
downvotes = case
when (OLD.score = -1) then downvotes - 1
else downvotes end
where id = OLD.post_id;
ELSIF (TG_OP = 'INSERT') THEN
update post_aggregates_fast
set score = case
when (NEW.score = 1) then score + 1
else score - 1 end,
upvotes = case
when (NEW.score = 1) then upvotes + 1
else upvotes end,
downvotes = case
when (NEW.score = -1) then downvotes + 1
else downvotes end
where id = NEW.post_id;
END IF;
return null;
end $$;
drop trigger refresh_post_like on post_like;
create trigger refresh_post_like
after insert or delete
on post_like
for each row
execute procedure refresh_post_like();
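-- A vote row is only ever +1 or -1, so inserting it applies its sign and
-- deleting it reverses it; presumably a changed vote arrives as a delete
-- plus an insert, since the trigger has no update branch.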
-- comment_like
-- Sample select
-- select id, score, my_vote from comment_fast_view where id = 29 and user_id = 4;
-- Sample insert
-- insert into comment_like(user_id, comment_id, post_id, score) values (4, 29, 51, 1);
-- Sample delete
-- delete from comment_like where user_id = 4 and comment_id = 29;
-- Sample update
-- update comment_like set score = -1 where user_id = 4 and comment_id = 29;
create or replace function refresh_comment_like()
returns trigger language plpgsql
as $$
begin
-- TODO possibly select from comment_fast to get previous scores, instead of re-fetching the views?
IF (TG_OP = 'DELETE') THEN
update comment_aggregates_fast
set score = case
when (OLD.score = 1) then score - 1
else score + 1 end,
upvotes = case
when (OLD.score = 1) then upvotes - 1
else upvotes end,
downvotes = case
when (OLD.score = -1) then downvotes - 1
else downvotes end
where id = OLD.comment_id;
ELSIF (TG_OP = 'INSERT') THEN
update comment_aggregates_fast
set score = case
when (NEW.score = 1) then score + 1
else score - 1 end,
upvotes = case
when (NEW.score = 1) then upvotes + 1
else upvotes end,
downvotes = case
when (NEW.score = -1) then downvotes + 1
else downvotes end
where id = NEW.comment_id;
END IF;
return null;
end $$;
drop trigger refresh_comment_like on comment_like;
create trigger refresh_comment_like
after insert or delete
on comment_like
for each row
execute procedure refresh_comment_like();
-- Community user ban
drop trigger refresh_community_user_ban on community_user_ban;
create trigger refresh_community_user_ban
after insert or delete -- Note: this trigger has no update event
on community_user_ban
for each row
execute procedure refresh_community_user_ban();
-- Sample select
-- select creator_name, banned_from_community from comment_fast_view where user_id = 4 and content = 'test_before_ban';
-- select creator_name, banned_from_community, community_id from comment_aggregates_fast where content = 'test_before_ban';
-- Sample insert
-- insert into comment(creator_id, post_id, content) values (1198, 341, 'test_before_ban');
-- insert into community_user_ban(community_id, user_id) values (2, 1198);
-- Sample delete
-- delete from community_user_ban where user_id = 1198 and community_id = 2;
-- delete from comment where content = 'test_before_ban';
-- Sample update
-- update comment_aggregates_fast set banned_from_community = false where creator_id = 1198 and community_id = 2;
create or replace function refresh_community_user_ban()
returns trigger language plpgsql
as $$
begin
IF (TG_OP = 'DELETE') THEN
update comment_aggregates_fast set banned_from_community = false where creator_id = OLD.user_id and community_id = OLD.community_id;
update post_aggregates_fast set banned_from_community = false where creator_id = OLD.user_id and community_id = OLD.community_id;
ELSIF (TG_OP = 'INSERT') THEN
update comment_aggregates_fast set banned_from_community = true where creator_id = NEW.user_id and community_id = NEW.community_id;
update post_aggregates_fast set banned_from_community = true where creator_id = NEW.user_id and community_id = NEW.community_id;
END IF;
return null;
end $$;
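-- Bans flip the denormalized banned_from_community flag in both fast tables
-- in place, avoiding a full re-derivation of the affected rows.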
-- Community follower
drop trigger refresh_community_follower on community_follower;
create trigger refresh_community_follower
after insert or delete -- Note: this trigger has no update event
on community_follower
for each row
execute procedure refresh_community_follower();
create or replace function refresh_community_follower()
returns trigger language plpgsql
as $$
begin
IF (TG_OP = 'DELETE') THEN
update community_aggregates_fast set number_of_subscribers = number_of_subscribers - 1 where id = OLD.community_id;
ELSIF (TG_OP = 'INSERT') THEN
update community_aggregates_fast set number_of_subscribers = number_of_subscribers + 1 where id = NEW.community_id;
END IF;
return null;
end $$;

View File

@ -1,388 +0,0 @@
drop view user_mention_view;
drop view reply_fast_view;
drop view comment_fast_view;
drop view comment_view;
drop view user_mention_fast_view;
drop table comment_aggregates_fast;
drop view comment_aggregates_view;
create view comment_aggregates_view as
select
ct.*,
-- community details
p.community_id,
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
-- creator details
u.banned as banned,
coalesce(cb.id, 0)::bool as banned_from_community,
u.actor_id as creator_actor_id,
u.local as creator_local,
u.name as creator_name,
u.avatar as creator_avatar,
-- score details
coalesce(cl.total, 0) as score,
coalesce(cl.up, 0) as upvotes,
coalesce(cl.down, 0) as downvotes,
hot_rank(coalesce(cl.total, 0), ct.published) as hot_rank
from comment ct
left join post p on ct.post_id = p.id
left join community c on p.community_id = c.id
left join user_ u on ct.creator_id = u.id
left join community_user_ban cb on ct.creator_id = cb.user_id and p.id = ct.post_id and p.community_id = cb.community_id
left join (
select
l.comment_id as id,
sum(l.score) as total,
count(case when l.score = 1 then 1 else null end) as up,
count(case when l.score = -1 then 1 else null end) as down
from comment_like l
group by comment_id
) as cl on cl.id = ct.id;
create or replace view comment_view as (
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_view cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_view cav
);
create table comment_aggregates_fast as select * from comment_aggregates_view;
alter table comment_aggregates_fast add primary key (id);
create view comment_fast_view as
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_fast cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_fast cav;
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.creator_actor_id,
c.creator_local,
c.post_id,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_actor_id,
c.community_local,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_fast_view as
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_ u
cross join (
select
ca.*
from comment_aggregates_fast ca
) ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from comment_aggregates_fast ac
left join user_mention um on um.comment_id = ac.id
;
-- Do the reply_view referencing the comment_fast_view
create view reply_fast_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent_id is null), which reply to the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_fast_view cv, closereply
where closereply.id = cv.id
;
-- add creator_published to the post view
drop view post_fast_view;
drop table post_aggregates_fast;
drop view post_view;
drop view post_aggregates_view;
create view post_aggregates_view as
select
p.*,
-- creator details
u.actor_id as creator_actor_id,
u."local" as creator_local,
u."name" as creator_name,
u.avatar as creator_avatar,
u.banned as banned,
cb.id::bool as banned_from_community,
-- community details
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
c.removed as community_removed,
c.deleted as community_deleted,
c.nsfw as community_nsfw,
-- post score data/comment count
coalesce(ct.comments, 0) as number_of_comments,
coalesce(pl.score, 0) as score,
coalesce(pl.upvotes, 0) as upvotes,
coalesce(pl.downvotes, 0) as downvotes,
hot_rank(
coalesce(pl.score , 0), (
case
when (p.published < ('now'::timestamp - '1 month'::interval))
then p.published
else greatest(ct.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case
when (p.published < ('now'::timestamp - '1 month'::interval))
then p.published
else greatest(ct.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join user_ u on p.creator_id = u.id
left join community_user_ban cb on p.creator_id = cb.user_id and p.community_id = cb.community_id
left join community c on p.community_id = c.id
left join (
select
post_id,
count(*) as comments,
max(published) as recent_comment_time
from comment
group by post_id
) ct on ct.post_id = p.id
left join (
select
post_id,
sum(score) as score,
sum(score) filter (where score = 1) as upvotes,
-sum(score) filter (where score = -1) as downvotes
from post_like
group by post_id
) pl on pl.post_id = p.id
order by p.id;
create view post_view as
select
pav.*,
us.id as user_id,
us.user_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_read::bool as read,
us.is_saved::bool as saved
from post_aggregates_view pav
cross join lateral (
select
u.id,
coalesce(cf.community_id, 0) as is_subbed,
coalesce(pr.post_id, 0) as is_read,
coalesce(ps.post_id, 0) as is_saved,
coalesce(pl.score, 0) as user_vote
from user_ u
left join community_user_ban cb on u.id = cb.user_id and cb.community_id = pav.community_id
left join community_follower cf on u.id = cf.user_id and cf.community_id = pav.community_id
left join post_read pr on u.id = pr.user_id and pr.post_id = pav.id
left join post_saved ps on u.id = ps.user_id and ps.post_id = pav.id
left join post_like pl on u.id = pl.user_id and pav.id = pl.post_id
) as us
union all
select
pav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from post_aggregates_view pav;
create table post_aggregates_fast as select * from post_aggregates_view;
alter table post_aggregates_fast add primary key (id);
create view post_fast_view as
select
pav.*,
us.id as user_id,
us.user_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_read::bool as read,
us.is_saved::bool as saved
from post_aggregates_fast pav
cross join lateral (
select
u.id,
coalesce(cf.community_id, 0) as is_subbed,
coalesce(pr.post_id, 0) as is_read,
coalesce(ps.post_id, 0) as is_saved,
coalesce(pl.score, 0) as user_vote
from user_ u
left join community_user_ban cb on u.id = cb.user_id and cb.community_id = pav.community_id
left join community_follower cf on u.id = cf.user_id and cf.community_id = pav.community_id
left join post_read pr on u.id = pr.user_id and pr.post_id = pav.id
left join post_saved ps on u.id = ps.user_id and ps.post_id = pav.id
left join post_like pl on u.id = pl.user_id and pav.id = pl.post_id
) as us
union all
select
pav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from post_aggregates_fast pav;

View File

@ -1,390 +0,0 @@
drop view user_mention_view;
drop view reply_fast_view;
drop view comment_fast_view;
drop view comment_view;
drop view user_mention_fast_view;
drop table comment_aggregates_fast;
drop view comment_aggregates_view;
create view comment_aggregates_view as
select
ct.*,
-- community details
p.community_id,
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
-- creator details
u.banned as banned,
coalesce(cb.id, 0)::bool as banned_from_community,
u.actor_id as creator_actor_id,
u.local as creator_local,
u.name as creator_name,
u.published as creator_published,
u.avatar as creator_avatar,
-- score details
coalesce(cl.total, 0) as score,
coalesce(cl.up, 0) as upvotes,
coalesce(cl.down, 0) as downvotes,
hot_rank(coalesce(cl.total, 0), ct.published) as hot_rank
from comment ct
left join post p on ct.post_id = p.id
left join community c on p.community_id = c.id
left join user_ u on ct.creator_id = u.id
left join community_user_ban cb on ct.creator_id = cb.user_id and p.id = ct.post_id and p.community_id = cb.community_id
left join (
select
l.comment_id as id,
sum(l.score) as total,
count(case when l.score = 1 then 1 else null end) as up,
count(case when l.score = -1 then 1 else null end) as down
from comment_like l
group by comment_id
) as cl on cl.id = ct.id;
create or replace view comment_view as (
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_view cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_view cav
);
create table comment_aggregates_fast as select * from comment_aggregates_view;
alter table comment_aggregates_fast add primary key (id);
create view comment_fast_view as
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_fast cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_fast cav;
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.creator_actor_id,
c.creator_local,
c.post_id,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_actor_id,
c.community_local,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_fast_view as
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_ u
cross join (
select
ca.*
from comment_aggregates_fast ca
) ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from comment_aggregates_fast ac
left join user_mention um on um.comment_id = ac.id
;
-- Do the reply_view referencing the comment_fast_view
create view reply_fast_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent_id is null), which reply to the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_fast_view cv, closereply
where closereply.id = cv.id
;
-- add creator_published to the post view
drop view post_fast_view;
drop table post_aggregates_fast;
drop view post_view;
drop view post_aggregates_view;
create view post_aggregates_view as
select
p.*,
-- creator details
u.actor_id as creator_actor_id,
u."local" as creator_local,
u."name" as creator_name,
u.published as creator_published,
u.avatar as creator_avatar,
u.banned as banned,
cb.id::bool as banned_from_community,
-- community details
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
c.removed as community_removed,
c.deleted as community_deleted,
c.nsfw as community_nsfw,
-- post score data/comment count
coalesce(ct.comments, 0) as number_of_comments,
coalesce(pl.score, 0) as score,
coalesce(pl.upvotes, 0) as upvotes,
coalesce(pl.downvotes, 0) as downvotes,
hot_rank(
coalesce(pl.score , 0), (
case
when (p.published < ('now'::timestamp - '1 month'::interval))
then p.published
else greatest(ct.recent_comment_time, p.published)
end
)
) as hot_rank,
(
case
when (p.published < ('now'::timestamp - '1 month'::interval))
then p.published
else greatest(ct.recent_comment_time, p.published)
end
) as newest_activity_time
from post p
left join user_ u on p.creator_id = u.id
left join community_user_ban cb on p.creator_id = cb.user_id and p.community_id = cb.community_id
left join community c on p.community_id = c.id
left join (
select
post_id,
count(*) as comments,
max(published) as recent_comment_time
from comment
group by post_id
) ct on ct.post_id = p.id
left join (
select
post_id,
sum(score) as score,
sum(score) filter (where score = 1) as upvotes,
-sum(score) filter (where score = -1) as downvotes
from post_like
group by post_id
) pl on pl.post_id = p.id
order by p.id;
create view post_view as
select
pav.*,
us.id as user_id,
us.user_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_read::bool as read,
us.is_saved::bool as saved
from post_aggregates_view pav
cross join lateral (
select
u.id,
coalesce(cf.community_id, 0) as is_subbed,
coalesce(pr.post_id, 0) as is_read,
coalesce(ps.post_id, 0) as is_saved,
coalesce(pl.score, 0) as user_vote
from user_ u
left join community_user_ban cb on u.id = cb.user_id and cb.community_id = pav.community_id
left join community_follower cf on u.id = cf.user_id and cf.community_id = pav.community_id
left join post_read pr on u.id = pr.user_id and pr.post_id = pav.id
left join post_saved ps on u.id = ps.user_id and ps.post_id = pav.id
left join post_like pl on u.id = pl.user_id and pav.id = pl.post_id
) as us
union all
select
pav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from post_aggregates_view pav;
create table post_aggregates_fast as select * from post_aggregates_view;
alter table post_aggregates_fast add primary key (id);
create view post_fast_view as
select
pav.*,
us.id as user_id,
us.user_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_read::bool as read,
us.is_saved::bool as saved
from post_aggregates_fast pav
cross join lateral (
select
u.id,
coalesce(cf.community_id, 0) as is_subbed,
coalesce(pr.post_id, 0) as is_read,
coalesce(ps.post_id, 0) as is_saved,
coalesce(pl.score, 0) as user_vote
from user_ u
left join community_user_ban cb on u.id = cb.user_id and cb.community_id = pav.community_id
left join community_follower cf on u.id = cf.user_id and cf.community_id = pav.community_id
left join post_read pr on u.id = pr.user_id and pr.post_id = pav.id
left join post_saved ps on u.id = ps.user_id and ps.post_id = pav.id
left join post_like pl on u.id = pl.user_id and pav.id = pl.post_id
) as us
union all
select
pav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as read,
null as saved
from post_aggregates_fast pav;

View File

@ -1,249 +0,0 @@
drop view user_mention_view;
drop view reply_fast_view;
drop view comment_fast_view;
drop view comment_view;
drop view user_mention_fast_view;
drop table comment_aggregates_fast;
drop view comment_aggregates_view;
create view comment_aggregates_view as
select
ct.*,
-- community details
p.community_id,
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
-- creator details
u.banned as banned,
coalesce(cb.id, 0)::bool as banned_from_community,
u.actor_id as creator_actor_id,
u.local as creator_local,
u.name as creator_name,
u.published as creator_published,
u.avatar as creator_avatar,
-- score details
coalesce(cl.total, 0) as score,
coalesce(cl.up, 0) as upvotes,
coalesce(cl.down, 0) as downvotes,
hot_rank(coalesce(cl.total, 0), ct.published) as hot_rank
from comment ct
left join post p on ct.post_id = p.id
left join community c on p.community_id = c.id
left join user_ u on ct.creator_id = u.id
left join community_user_ban cb on ct.creator_id = cb.user_id and p.id = ct.post_id and p.community_id = cb.community_id
left join (
select
l.comment_id as id,
sum(l.score) as total,
count(case when l.score = 1 then 1 else null end) as up,
count(case when l.score = -1 then 1 else null end) as down
from comment_like l
group by comment_id
) as cl on cl.id = ct.id;
create or replace view comment_view as (
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_view cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_view cav
);
create table comment_aggregates_fast as select * from comment_aggregates_view;
alter table comment_aggregates_fast add primary key (id);
create view comment_fast_view as
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_fast cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_fast cav;
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.creator_actor_id,
c.creator_local,
c.post_id,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_actor_id,
c.community_local,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_fast_view as
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_ u
cross join (
select
ca.*
from comment_aggregates_fast ca
) ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from comment_aggregates_fast ac
left join user_mention um on um.comment_id = ac.id
;
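A sketch of the unread-mentions lookup this view is built for (user id 34 is an arbitrary example, mirroring the explain script further down):

-- sketch: unread mentions addressed to one user
select *
from user_mention_fast_view
where user_id = 34 and recipient_id = 34 and read = false;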
-- Build the reply view on top of the comment_fast_view
create view reply_fast_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent_id is null): their recipient is the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_fast_view cv, closereply
where closereply.id = cv.id
;
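And the matching lookup against the reply view, again a sketch with an arbitrary user id:

-- sketch: replies addressed to one user
select *
from reply_fast_view
where user_id = 34 and recipient_id = 34
order by published desc;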


@@ -1,254 +0,0 @@
drop view user_mention_view;
drop view reply_fast_view;
drop view comment_fast_view;
drop view comment_view;
drop view user_mention_fast_view;
drop table comment_aggregates_fast;
drop view comment_aggregates_view;
create view comment_aggregates_view as
select
ct.*,
-- post details
p."name" as post_name,
p.community_id,
-- community details
c.actor_id as community_actor_id,
c."local" as community_local,
c."name" as community_name,
-- creator details
u.banned as banned,
coalesce(cb.id, 0)::bool as banned_from_community,
u.actor_id as creator_actor_id,
u.local as creator_local,
u.name as creator_name,
u.published as creator_published,
u.avatar as creator_avatar,
-- score details
coalesce(cl.total, 0) as score,
coalesce(cl.up, 0) as upvotes,
coalesce(cl.down, 0) as downvotes,
hot_rank(coalesce(cl.total, 0), ct.published) as hot_rank
from comment ct
left join post p on ct.post_id = p.id
left join community c on p.community_id = c.id
left join user_ u on ct.creator_id = u.id
left join community_user_ban cb on ct.creator_id = cb.user_id and p.id = ct.post_id and p.community_id = cb.community_id
left join (
select
l.comment_id as id,
sum(l.score) as total,
count(case when l.score = 1 then 1 else null end) as up,
count(case when l.score = -1 then 1 else null end) as down
from comment_like l
group by comment_id
) as cl on cl.id = ct.id;
create or replace view comment_view as (
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_view cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_view cav
);
create table comment_aggregates_fast as select * from comment_aggregates_view;
alter table comment_aggregates_fast add primary key (id);
create view comment_fast_view as
select
cav.*,
us.user_id as user_id,
us.my_vote as my_vote,
us.is_subbed::bool as subscribed,
us.is_saved::bool as saved
from comment_aggregates_fast cav
cross join lateral (
select
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
coalesce(cf.id, 0) as is_subbed,
coalesce(cs.id, 0) as is_saved
from user_ u
left join comment_like cl on u.id = cl.user_id and cav.id = cl.comment_id
left join comment_saved cs on u.id = cs.user_id and cs.comment_id = cav.id
left join community_follower cf on u.id = cf.user_id and cav.community_id = cf.community_id
) as us
union all
select
cav.*,
null as user_id,
null as my_vote,
null as subscribed,
null as saved
from comment_aggregates_fast cav;
create view user_mention_view as
select
c.id,
um.id as user_mention_id,
c.creator_id,
c.creator_actor_id,
c.creator_local,
c.post_id,
c.post_name,
c.parent_id,
c.content,
c.removed,
um.read,
c.published,
c.updated,
c.deleted,
c.community_id,
c.community_actor_id,
c.community_local,
c.community_name,
c.banned,
c.banned_from_community,
c.creator_name,
c.creator_avatar,
c.score,
c.upvotes,
c.downvotes,
c.hot_rank,
c.user_id,
c.my_vote,
c.saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_mention um, comment_view c
where um.comment_id = c.id;
create view user_mention_fast_view as
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.post_name,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
u.id as user_id,
coalesce(cl.score, 0) as my_vote,
(select cs.id::bool from comment_saved cs where u.id = cs.user_id and cs.comment_id = ac.id) as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from user_ u
cross join (
select
ca.*
from comment_aggregates_fast ca
) ac
left join comment_like cl on u.id = cl.user_id and ac.id = cl.comment_id
left join user_mention um on um.comment_id = ac.id
union all
select
ac.id,
um.id as user_mention_id,
ac.creator_id,
ac.creator_actor_id,
ac.creator_local,
ac.post_id,
ac.post_name,
ac.parent_id,
ac.content,
ac.removed,
um.read,
ac.published,
ac.updated,
ac.deleted,
ac.community_id,
ac.community_actor_id,
ac.community_local,
ac.community_name,
ac.banned,
ac.banned_from_community,
ac.creator_name,
ac.creator_avatar,
ac.score,
ac.upvotes,
ac.downvotes,
ac.hot_rank,
null as user_id,
null as my_vote,
null as saved,
um.recipient_id,
(select actor_id from user_ u where u.id = um.recipient_id) as recipient_actor_id,
(select local from user_ u where u.id = um.recipient_id) as recipient_local
from comment_aggregates_fast ac
left join user_mention um on um.comment_id = ac.id
;
-- Build the reply view on top of the comment_fast_view
create view reply_fast_view as
with closereply as (
select
c2.id,
c2.creator_id as sender_id,
c.creator_id as recipient_id
from comment c
inner join comment c2 on c.id = c2.parent_id
where c2.creator_id != c.creator_id
-- Union in top-level comments (parent_id is null): their recipient is the post creator
union
select
c.id,
c.creator_id as sender_id,
p.creator_id as recipient_id
from comment c, post p
where c.post_id = p.id and c.parent_id is null and c.creator_id != p.creator_id
)
select cv.*,
closereply.recipient_id
from comment_fast_view cv, closereply
where closereply.id = cv.id
;
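The main visible difference from the previous revision of these views is the added post_name column; a sketch of a notification query that makes use of it (arbitrary user id again):

-- sketch: unread mentions with the post title included
select content, post_name, recipient_id
from user_mention_fast_view
where user_id = 34 and recipient_id = 34 and read = false
order by published desc;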


@@ -11,12 +11,6 @@ declare -a arr=(
"https://torrents-csv.ml/service/search?q=wheel&page=1&type_=torrent"
)
## check if ab installed
if ! [ -x "$(command -v ab)" ]; then
echo 'Error: ab (Apache Bench) is not installed. https://httpd.apache.org/docs/2.4/programs/ab.html' >&2
exit 1
fi
## now loop through the above array
for i in "${arr[@]}"
do


@@ -15,12 +15,6 @@ declare -a arr=(
"/api/v1/post/list?sort=Hot&type_=All"
)
## check if ab installed
if ! [ -x "$(command -v ab)" ]; then
echo 'Error: ab (Apache Bench) is not installed. https://httpd.apache.org/docs/2.4/programs/ab.html' >&2
exit 1
fi
## now loop through the above array
for path in "${arr[@]}"
do


@@ -1,42 +1,31 @@
#!/bin/bash
set -e
# You can import these to http://tatiyants.com/pev/#/plans/new
# Do the views first
echo "explain (analyze, format json) select * from user_fast" > explain.sql
psql -qAt -U lemmy -f explain.sql > user_fast.json
echo "explain (analyze, format json) select * from user_mview" > explain.sql
psql -qAt -U lemmy -f explain.sql > user_view.json
echo "explain (analyze, format json) select * from post_view where user_id is null order by hot_rank desc, published desc" > explain.sql
echo "explain (analyze, format json) select * from post_mview where user_id is null order by hot_rank desc, published desc" > explain.sql
psql -qAt -U lemmy -f explain.sql > post_view.json
echo "explain (analyze, format json) select * from post_fast_view where user_id is null order by hot_rank desc, published desc" > explain.sql
psql -qAt -U lemmy -f explain.sql > post_fast_view.json
echo "explain (analyze, format json) select * from comment_view where user_id is null" > explain.sql
echo "explain (analyze, format json) select * from comment_mview where user_id is null" > explain.sql
psql -qAt -U lemmy -f explain.sql > comment_view.json
echo "explain (analyze, format json) select * from comment_fast_view where user_id is null" > explain.sql
psql -qAt -U lemmy -f explain.sql > comment_fast_view.json
echo "explain (analyze, format json) select * from community_view where user_id is null order by hot_rank desc" > explain.sql
echo "explain (analyze, format json) select * from community_mview where user_id is null order by hot_rank desc" > explain.sql
psql -qAt -U lemmy -f explain.sql > community_view.json
echo "explain (analyze, format json) select * from community_fast_view where user_id is null order by hot_rank desc" > explain.sql
psql -qAt -U lemmy -f explain.sql > community_fast_view.json
echo "explain (analyze, format json) select * from site_view limit 1" > explain.sql
psql -qAt -U lemmy -f explain.sql > site_view.json
echo "explain (analyze, format json) select * from reply_fast_view where user_id = 34 and recipient_id = 34" > explain.sql
psql -qAt -U lemmy -f explain.sql > reply_fast_view.json
echo "explain (analyze, format json) select * from reply_view where user_id = 34 and recipient_id = 34" > explain.sql
psql -qAt -U lemmy -f explain.sql > reply_view.json
echo "explain (analyze, format json) select * from user_mention_view where user_id = 34 and recipient_id = 34" > explain.sql
psql -qAt -U lemmy -f explain.sql > user_mention_view.json
echo "explain (analyze, format json) select * from user_mention_fast_view where user_id = 34 and recipient_id = 34" > explain.sql
psql -qAt -U lemmy -f explain.sql > user_mention_fast_view.json
echo "explain (analyze, format json) select * from user_mention_mview where user_id = 34 and recipient_id = 34" > explain.sql
psql -qAt -U lemmy -f explain.sql > user_mention_mview.json
grep "Execution Time" *.json


@@ -1,73 +0,0 @@
use diesel::{result::Error, PgConnection};
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, TokenData, Validation};
use lemmy_db::{user::User_, Crud};
use lemmy_utils::{is_email_regex, settings::Settings};
use serde::{Deserialize, Serialize};
type Jwt = String;
#[derive(Debug, Serialize, Deserialize)]
pub struct Claims {
pub id: i32,
pub username: String,
pub iss: String,
pub show_nsfw: bool,
pub theme: String,
pub default_sort_type: i16,
pub default_listing_type: i16,
pub lang: String,
pub avatar: Option<String>,
pub show_avatars: bool,
}
impl Claims {
pub fn decode(jwt: &str) -> Result<TokenData<Claims>, jsonwebtoken::errors::Error> {
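// Expiry validation is deliberately disabled below: these tokens never expire.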
let v = Validation {
validate_exp: false,
..Validation::default()
};
decode::<Claims>(
&jwt,
&DecodingKey::from_secret(Settings::get().jwt_secret.as_ref()),
&v,
)
}
pub fn jwt(user: User_, hostname: String) -> Jwt {
let my_claims = Claims {
id: user.id,
username: user.name.to_owned(),
iss: hostname,
show_nsfw: user.show_nsfw,
theme: user.theme.to_owned(),
default_sort_type: user.default_sort_type,
default_listing_type: user.default_listing_type,
lang: user.lang.to_owned(),
avatar: user.avatar.to_owned(),
show_avatars: user.show_avatars.to_owned(),
};
encode(
&Header::default(),
&my_claims,
&EncodingKey::from_secret(Settings::get().jwt_secret.as_ref()),
)
.unwrap()
}
// TODO: move these into user?
pub fn find_by_email_or_username(
conn: &PgConnection,
username_or_email: &str,
) -> Result<User_, Error> {
if is_email_regex(username_or_email) {
User_::find_by_email(conn, username_or_email)
} else {
User_::find_by_username(conn, username_or_email)
}
}
pub fn find_by_jwt(conn: &PgConnection, jwt: &str) -> Result<User_, Error> {
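// Note: panics (via expect) if the token cannot be decoded.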
let claims: Claims = Claims::decode(&jwt).expect("Invalid token").claims;
User_::read(&conn, claims.id)
}
}


@@ -1,43 +1,7 @@
use crate::{
api::{claims::Claims, APIError, Oper, Perform},
apub::{ApubLikeableType, ApubObjectType},
blocking,
websocket::{
server::{JoinCommunityRoom, SendComment},
UserOperation,
WebsocketInfo,
},
DbPool,
LemmyError,
};
use lemmy_db::{
comment::*,
comment_view::*,
community_view::*,
moderator::*,
naive_now,
post::*,
site_view::*,
user::*,
user_mention::*,
user_view::*,
Crud,
Likeable,
ListingType,
Saveable,
SortType,
};
use lemmy_utils::{
make_apub_endpoint,
remove_slurs,
scrape_text_for_mentions,
send_email,
settings::Settings,
EndpointType,
MentionData,
};
use log::error;
use serde::{Deserialize, Serialize};
use super::*;
use crate::send_email;
use crate::settings::Settings;
use diesel::PgConnection;
use std::str::FromStr;
#[derive(Serialize, Deserialize)]
@@ -99,15 +63,8 @@ pub struct GetCommentsResponse {
comments: Vec<CommentView>,
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<CreateComment> {
type Response = CommentResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<CommentResponse, LemmyError> {
impl Perform<CommentResponse> for Oper<CreateComment> {
fn perform(&self, conn: &PgConnection) -> Result<CommentResponse, Error> {
let data: &CreateComment = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -117,6 +74,19 @@ impl Perform for Oper<CreateComment> {
let user_id = claims.id;
let hostname = &format!("https://{}", Settings::get().hostname);
// Check for a community ban
let post = Post::read(&conn, data.post_id)?;
if CommunityUserBanView::get(&conn, user_id, post.community_id).is_ok() {
return Err(APIError::err("community_ban").into());
}
// Check for a site ban
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
let content_slurs_removed = remove_slurs(&data.content.to_owned());
let comment_form = CommentForm {
@@ -127,56 +97,115 @@ impl Perform for Oper<CreateComment> {
removed: None,
deleted: None,
read: None,
published: None,
updated: None,
ap_id: "http://fake.com".into(),
local: true,
};
// Check for a community ban
let post_id = data.post_id;
let post = blocking(pool, move |conn| Post::read(conn, post_id)).await??;
let community_id = post.community_id;
let is_banned =
move |conn: &'_ _| CommunityUserBanView::get(conn, user_id, community_id).is_ok();
if blocking(pool, is_banned).await? {
return Err(APIError::err("community_ban").into());
}
// Check for a site ban
let user = blocking(pool, move |conn| User_::read(&conn, user_id)).await??;
if user.banned {
return Err(APIError::err("site_ban").into());
}
let comment_form2 = comment_form.clone();
let inserted_comment =
match blocking(pool, move |conn| Comment::create(&conn, &comment_form2)).await? {
Ok(comment) => comment,
Err(_e) => return Err(APIError::err("couldnt_create_comment").into()),
};
let inserted_comment_id = inserted_comment.id;
let updated_comment: Comment = match blocking(pool, move |conn| {
let apub_id =
make_apub_endpoint(EndpointType::Comment, &inserted_comment_id.to_string()).to_string();
Comment::update_ap_id(&conn, inserted_comment_id, apub_id)
})
.await?
{
let inserted_comment = match Comment::create(&conn, &comment_form) {
Ok(comment) => comment,
Err(_e) => return Err(APIError::err("couldnt_create_comment").into()),
};
updated_comment
.send_create(&user, &self.client, pool)
.await?;
let mut recipient_ids = Vec::new();
// Scan the comment for user mentions, add those rows
let mentions = scrape_text_for_mentions(&comment_form.content);
let recipient_ids =
send_local_notifs(mentions, updated_comment.clone(), user.clone(), post, pool).await?;
let extracted_usernames = extract_usernames(&comment_form.content);
for username_mention in &extracted_usernames {
if let Ok(mention_user) = User_::read_from_name(&conn, (*username_mention).to_string()) {
// You can't mention yourself
// At some point, make it so you can't tag the parent creator either
// This can cause two notifications, one for reply and the other for mention
if mention_user.id != user_id {
recipient_ids.push(mention_user.id);
let user_mention_form = UserMentionForm {
recipient_id: mention_user.id,
comment_id: inserted_comment.id,
read: None,
};
// Allow this to fail softly, since comment edits might re-update or replace it
// Let the uniqueness constraint handle this failure
match UserMention::create(&conn, &user_mention_form) {
Ok(_mention) => (),
Err(_e) => eprintln!("{}", &_e),
};
// Send an email to those users that have notifications on
if mention_user.send_notifications_to_email {
if let Some(mention_email) = mention_user.email {
let subject = &format!(
"{} - Mentioned by {}",
Settings::get().hostname,
claims.username
);
let html = &format!(
"<h1>User Mention</h1><br><div>{} - {}</div><br><a href={}/inbox>inbox</a>",
claims.username, comment_form.content, hostname
);
match send_email(subject, &mention_email, &mention_user.name, html) {
Ok(_o) => _o,
Err(e) => eprintln!("{}", e),
};
}
}
}
}
}
// Send notifs to the parent commenter / poster
match data.parent_id {
Some(parent_id) => {
let parent_comment = Comment::read(&conn, parent_id)?;
if parent_comment.creator_id != user_id {
let parent_user = User_::read(&conn, parent_comment.creator_id)?;
recipient_ids.push(parent_user.id);
if parent_user.send_notifications_to_email {
if let Some(comment_reply_email) = parent_user.email {
let subject = &format!(
"{} - Reply from {}",
Settings::get().hostname,
claims.username
);
let html = &format!(
"<h1>Comment Reply</h1><br><div>{} - {}</div><br><a href={}/inbox>inbox</a>",
claims.username, comment_form.content, hostname
);
match send_email(subject, &comment_reply_email, &parent_user.name, html) {
Ok(_o) => _o,
Err(e) => eprintln!("{}", e),
};
}
}
}
}
// It's a post
None => {
if post.creator_id != user_id {
let parent_user = User_::read(&conn, post.creator_id)?;
recipient_ids.push(parent_user.id);
if parent_user.send_notifications_to_email {
if let Some(post_reply_email) = parent_user.email {
let subject = &format!(
"{} - Reply from {}",
Settings::get().hostname,
claims.username
);
let html = &format!(
"<h1>Post Reply</h1><br><div>{} - {}</div><br><a href={}/inbox>inbox</a>",
claims.username, comment_form.content, hostname
);
match send_email(subject, &post_reply_email, &parent_user.name, html) {
Ok(_o) => _o,
Err(e) => eprintln!("{}", e),
};
}
}
}
}
};
// You like your own comment by default
let like_form = CommentLikeForm {
@@ -186,48 +215,22 @@ impl Perform for Oper<CreateComment> {
score: 1,
};
let like = move |conn: &'_ _| CommentLike::like(&conn, &like_form);
if blocking(pool, like).await?.is_err() {
return Err(APIError::err("couldnt_like_comment").into());
}
updated_comment.send_like(&user, &self.client, pool).await?;
let comment_view = blocking(pool, move |conn| {
CommentView::read(&conn, inserted_comment.id, Some(user_id))
})
.await??;
let mut res = CommentResponse {
comment: comment_view,
recipient_ids,
let _inserted_like = match CommentLike::like(&conn, &like_form) {
Ok(like) => like,
Err(_e) => return Err(APIError::err("couldnt_like_comment").into()),
};
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendComment {
op: UserOperation::CreateComment,
comment: res.clone(),
my_id: ws.id,
});
let comment_view = CommentView::read(&conn, inserted_comment.id, Some(user_id))?;
// strip out the recipient_ids, so that
// users don't get double notifs
res.recipient_ids = Vec::new();
}
Ok(res)
Ok(CommentResponse {
comment: comment_view,
recipient_ids,
})
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<EditComment> {
type Response = CommentResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<CommentResponse, LemmyError> {
impl Perform<CommentResponse> for Oper<EditComment> {
fn perform(&self, conn: &PgConnection) -> Result<CommentResponse, Error> {
let data: &EditComment = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -237,214 +240,127 @@ impl Perform for Oper<EditComment> {
let user_id = claims.id;
let user = blocking(pool, move |conn| User_::read(&conn, user_id)).await??;
let orig_comment = CommentView::read(&conn, data.edit_id, None)?;
let edit_id = data.edit_id;
let orig_comment =
blocking(pool, move |conn| CommentView::read(&conn, edit_id, None)).await??;
let mut editors: Vec<i32> = vec![orig_comment.creator_id];
let mut moderators: Vec<i32> = vec![];
let community_id = orig_comment.community_id;
moderators.append(
&mut blocking(pool, move |conn| {
CommunityModeratorView::for_community(&conn, community_id)
.map(|v| v.into_iter().map(|m| m.user_id).collect())
})
.await??,
);
moderators.append(
&mut blocking(pool, move |conn| {
UserView::admins(conn).map(|v| v.into_iter().map(|a| a.id).collect())
})
.await??,
);
editors.extend(&moderators);
// You are allowed to mark the comment as read even if you're banned.
if data.read.is_none() {
// Verify its the creator or a mod, or an admin
let mut editors: Vec<i32> = vec![data.creator_id];
editors.append(
&mut CommunityModeratorView::for_community(&conn, orig_comment.community_id)?
.into_iter()
.map(|m| m.user_id)
.collect(),
);
editors.append(&mut UserView::admins(&conn)?.into_iter().map(|a| a.id).collect());
if !editors.contains(&user_id) {
return Err(APIError::err("no_comment_edit_allowed").into());
}
// Check for a community ban
let community_id = orig_comment.community_id;
let is_banned =
move |conn: &'_ _| CommunityUserBanView::get(conn, user_id, community_id).is_ok();
if blocking(pool, is_banned).await? {
if CommunityUserBanView::get(&conn, user_id, orig_comment.community_id).is_ok() {
return Err(APIError::err("community_ban").into());
}
// Check for a site ban
if user.banned {
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
} else {
// check that user can mark as read
let parent_id = orig_comment.parent_id;
match parent_id {
Some(pid) => {
let parent_comment =
blocking(pool, move |conn| CommentView::read(&conn, pid, None)).await??;
if user_id != parent_comment.creator_id {
return Err(APIError::err("no_comment_edit_allowed").into());
}
}
None => {
let parent_post_id = orig_comment.post_id;
let parent_post = blocking(pool, move |conn| Post::read(conn, parent_post_id)).await??;
if user_id != parent_post.creator_id {
return Err(APIError::err("no_comment_edit_allowed").into());
}
}
}
}
let content_slurs_removed = remove_slurs(&data.content.to_owned());
let edit_id = data.edit_id;
let read_comment = blocking(pool, move |conn| Comment::read(conn, edit_id)).await??;
let comment_form = {
if data.read.is_none() {
// the ban etc. checks should have been made and passed
// the comment can be properly edited
let post_removed = if moderators.contains(&user_id) {
data.removed
} else {
Some(read_comment.removed)
};
CommentForm {
content: content_slurs_removed,
parent_id: read_comment.parent_id,
post_id: read_comment.post_id,
creator_id: read_comment.creator_id,
removed: post_removed.to_owned(),
deleted: data.deleted.to_owned(),
read: Some(read_comment.read),
published: None,
updated: Some(naive_now()),
ap_id: read_comment.ap_id,
local: read_comment.local,
}
let comment_form = CommentForm {
content: content_slurs_removed,
parent_id: data.parent_id,
post_id: data.post_id,
creator_id: data.creator_id,
removed: data.removed.to_owned(),
deleted: data.deleted.to_owned(),
read: data.read.to_owned(),
updated: if data.read.is_some() {
orig_comment.updated
} else {
// the only field that can be updated is the read field
CommentForm {
content: read_comment.content,
parent_id: read_comment.parent_id,
post_id: read_comment.post_id,
creator_id: read_comment.creator_id,
removed: Some(read_comment.removed).to_owned(),
deleted: Some(read_comment.deleted).to_owned(),
read: data.read.to_owned(),
published: None,
updated: orig_comment.updated,
ap_id: read_comment.ap_id,
local: read_comment.local,
}
}
Some(naive_now())
},
};
let edit_id = data.edit_id;
let comment_form2 = comment_form.clone();
let updated_comment = match blocking(pool, move |conn| {
Comment::update(conn, edit_id, &comment_form2)
})
.await?
{
let _updated_comment = match Comment::update(&conn, data.edit_id, &comment_form) {
Ok(comment) => comment,
Err(_e) => return Err(APIError::err("couldnt_update_comment").into()),
};
if data.read.is_none() {
if let Some(deleted) = data.deleted.to_owned() {
if deleted {
updated_comment
.send_delete(&user, &self.client, pool)
.await?;
} else {
updated_comment
.send_undo_delete(&user, &self.client, pool)
.await?;
}
} else if let Some(removed) = data.removed.to_owned() {
if moderators.contains(&user_id) {
if removed {
updated_comment
.send_remove(&user, &self.client, pool)
.await?;
} else {
updated_comment
.send_undo_remove(&user, &self.client, pool)
.await?;
let mut recipient_ids = Vec::new();
// Scan the comment for user mentions, add those rows
let extracted_usernames = extract_usernames(&comment_form.content);
for username_mention in &extracted_usernames {
let mention_user = User_::read_from_name(&conn, (*username_mention).to_string());
if mention_user.is_ok() {
let mention_user_id = mention_user?.id;
// You can't mention yourself
// At some point, make it so you can't tag the parent creator either
// This can cause two notifications, one for reply and the other for mention
if mention_user_id != user_id {
recipient_ids.push(mention_user_id);
let user_mention_form = UserMentionForm {
recipient_id: mention_user_id,
comment_id: data.edit_id,
read: None,
};
// Allow this to fail softly, since comment edits might re-update or replace it
// Let the uniqueness constraint handle this failure
match UserMention::create(&conn, &user_mention_form) {
Ok(_mention) => (),
Err(_e) => eprintln!("{}", &_e),
}
}
} else {
updated_comment
.send_update(&user, &self.client, pool)
.await?;
}
}
// Mod tables
if moderators.contains(&user_id) {
if let Some(removed) = data.removed.to_owned() {
let form = ModRemoveCommentForm {
mod_user_id: user_id,
comment_id: data.edit_id,
removed: Some(removed),
reason: data.reason.to_owned(),
};
blocking(pool, move |conn| ModRemoveComment::create(conn, &form)).await??;
// Add to recipient ids
match data.parent_id {
Some(parent_id) => {
let parent_comment = Comment::read(&conn, parent_id)?;
if parent_comment.creator_id != user_id {
let parent_user = User_::read(&conn, parent_comment.creator_id)?;
recipient_ids.push(parent_user.id);
}
}
None => {
let post = Post::read(&conn, data.post_id)?;
recipient_ids.push(post.creator_id);
}
}
let post_id = data.post_id;
let post = blocking(pool, move |conn| Post::read(conn, post_id)).await??;
// Mod tables
if let Some(removed) = data.removed.to_owned() {
let form = ModRemoveCommentForm {
mod_user_id: user_id,
comment_id: data.edit_id,
removed: Some(removed),
reason: data.reason.to_owned(),
};
ModRemoveComment::create(&conn, &form)?;
}
let mentions = scrape_text_for_mentions(&comment_form.content);
let recipient_ids = send_local_notifs(mentions, updated_comment, user, post, pool).await?;
let comment_view = CommentView::read(&conn, data.edit_id, Some(user_id))?;
let edit_id = data.edit_id;
let comment_view = blocking(pool, move |conn| {
CommentView::read(conn, edit_id, Some(user_id))
})
.await??;
let mut res = CommentResponse {
Ok(CommentResponse {
comment: comment_view,
recipient_ids,
};
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendComment {
op: UserOperation::EditComment,
comment: res.clone(),
my_id: ws.id,
});
// strip out the recipient_ids, so that
// users don't get double notifs
res.recipient_ids = Vec::new();
}
Ok(res)
})
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<SaveComment> {
type Response = CommentResponse;
async fn perform(
&self,
pool: &DbPool,
_websocket_info: Option<WebsocketInfo>,
) -> Result<CommentResponse, LemmyError> {
impl Perform<CommentResponse> for Oper<SaveComment> {
fn perform(&self, conn: &PgConnection) -> Result<CommentResponse, Error> {
let data: &SaveComment = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -460,22 +376,18 @@ impl Perform for Oper<SaveComment> {
};
if data.save {
let save_comment = move |conn: &'_ _| CommentSaved::save(conn, &comment_saved_form);
if blocking(pool, save_comment).await?.is_err() {
return Err(APIError::err("couldnt_save_comment").into());
}
match CommentSaved::save(&conn, &comment_saved_form) {
Ok(comment) => comment,
Err(_e) => return Err(APIError::err("couldnt_save_comment").into()),
};
} else {
let unsave_comment = move |conn: &'_ _| CommentSaved::unsave(conn, &comment_saved_form);
if blocking(pool, unsave_comment).await?.is_err() {
return Err(APIError::err("couldnt_save_comment").into());
}
match CommentSaved::unsave(&conn, &comment_saved_form) {
Ok(comment) => comment,
Err(_e) => return Err(APIError::err("couldnt_save_comment").into()),
};
}
let comment_id = data.comment_id;
let comment_view = blocking(pool, move |conn| {
CommentView::read(conn, comment_id, Some(user_id))
})
.await??;
let comment_view = CommentView::read(&conn, data.comment_id, Some(user_id))?;
Ok(CommentResponse {
comment: comment_view,
@@ -484,15 +396,8 @@ impl Perform for Oper<SaveComment> {
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<CreateCommentLike> {
type Response = CommentResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<CommentResponse, LemmyError> {
impl Perform<CommentResponse> for Oper<CreateCommentLike> {
fn perform(&self, conn: &PgConnection) -> Result<CommentResponse, Error> {
let data: &CreateCommentLike = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -506,40 +411,31 @@ impl Perform for Oper<CreateCommentLike> {
// Don't do a downvote if site has downvotes disabled
if data.score == -1 {
let site = blocking(pool, move |conn| SiteView::read(conn)).await??;
let site = SiteView::read(&conn)?;
if !site.enable_downvotes {
return Err(APIError::err("downvotes_disabled").into());
}
}
// Check for a community ban
let post_id = data.post_id;
let post = blocking(pool, move |conn| Post::read(conn, post_id)).await??;
let community_id = post.community_id;
let is_banned =
move |conn: &'_ _| CommunityUserBanView::get(conn, user_id, community_id).is_ok();
if blocking(pool, is_banned).await? {
let post = Post::read(&conn, data.post_id)?;
if CommunityUserBanView::get(&conn, user_id, post.community_id).is_ok() {
return Err(APIError::err("community_ban").into());
}
// Check for a site ban
let user = blocking(pool, move |conn| User_::read(conn, user_id)).await??;
if user.banned {
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
let comment_id = data.comment_id;
let comment = blocking(pool, move |conn| Comment::read(conn, comment_id)).await??;
let comment = Comment::read(&conn, data.comment_id)?;
// Add to recipient ids
match comment.parent_id {
Some(parent_id) => {
let parent_comment = blocking(pool, move |conn| Comment::read(conn, parent_id)).await??;
let parent_comment = Comment::read(&conn, parent_id)?;
if parent_comment.creator_id != user_id {
let parent_user = blocking(pool, move |conn| {
User_::read(conn, parent_comment.creator_id)
})
.await??;
let parent_user = User_::read(&conn, parent_comment.creator_id)?;
recipient_ids.push(parent_user.id);
}
}
@@ -556,64 +452,29 @@ impl Perform for Oper<CreateCommentLike> {
};
// Remove any likes first
let like_form2 = like_form.clone();
blocking(pool, move |conn| CommentLike::remove(conn, &like_form2)).await??;
CommentLike::remove(&conn, &like_form)?;
// Only add the like if the score isn't 0 (a score of 0 undoes the vote; 1 and -1 are up/downvotes)
let do_add = like_form.score != 0 && (like_form.score == 1 || like_form.score == -1);
if do_add {
let like_form2 = like_form.clone();
let like = move |conn: &'_ _| CommentLike::like(conn, &like_form2);
if blocking(pool, like).await?.is_err() {
return Err(APIError::err("couldnt_like_comment").into());
}
if like_form.score == 1 {
comment.send_like(&user, &self.client, pool).await?;
} else if like_form.score == -1 {
comment.send_dislike(&user, &self.client, pool).await?;
}
} else {
comment.send_undo_like(&user, &self.client, pool).await?;
let _inserted_like = match CommentLike::like(&conn, &like_form) {
Ok(like) => like,
Err(_e) => return Err(APIError::err("couldnt_like_comment").into()),
};
}
// Have to refetch the comment to get the current state
let comment_id = data.comment_id;
let liked_comment = blocking(pool, move |conn| {
CommentView::read(conn, comment_id, Some(user_id))
})
.await??;
let liked_comment = CommentView::read(&conn, data.comment_id, Some(user_id))?;
let mut res = CommentResponse {
Ok(CommentResponse {
comment: liked_comment,
recipient_ids,
};
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendComment {
op: UserOperation::CreateCommentLike,
comment: res.clone(),
my_id: ws.id,
});
// strip out the recipient_ids, so that
// users don't get double notifs
res.recipient_ids = Vec::new();
}
Ok(res)
})
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<GetComments> {
type Response = GetCommentsResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<GetCommentsResponse, LemmyError> {
impl Perform<GetCommentsResponse> for Oper<GetComments> {
fn perform(&self, conn: &PgConnection) -> Result<GetCommentsResponse, Error> {
let data: &GetComments = &self.data;
let user_claims: Option<Claims> = match &data.auth {
@@ -632,157 +493,19 @@ impl Perform for Oper<GetComments> {
let type_ = ListingType::from_str(&data.type_)?;
let sort = SortType::from_str(&data.sort)?;
let community_id = data.community_id;
let page = data.page;
let limit = data.limit;
let comments = blocking(pool, move |conn| {
CommentQueryBuilder::create(conn)
.listing_type(type_)
.sort(&sort)
.for_community_id(community_id)
.my_user_id(user_id)
.page(page)
.limit(limit)
.list()
})
.await?;
let comments = match comments {
let comments = match CommentQueryBuilder::create(&conn)
.listing_type(type_)
.sort(&sort)
.for_community_id(data.community_id)
.my_user_id(user_id)
.page(data.page)
.limit(data.limit)
.list()
{
Ok(comments) => comments,
Err(_) => return Err(APIError::err("couldnt_get_comments").into()),
Err(_e) => return Err(APIError::err("couldnt_get_comments").into()),
};
if let Some(ws) = websocket_info {
// You don't need to join the specific community room, because this is already handled by
// GetCommunity
if data.community_id.is_none() {
if let Some(id) = ws.id {
// 0 is the "all" community
ws.chatserver.do_send(JoinCommunityRoom {
community_id: 0,
id,
});
}
}
}
Ok(GetCommentsResponse { comments })
}
}
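// Fans mention/reply notifications out to local users on a blocking thread,
// returning the ids of the users that were notified.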
pub async fn send_local_notifs(
mentions: Vec<MentionData>,
comment: Comment,
user: User_,
post: Post,
pool: &DbPool,
) -> Result<Vec<i32>, LemmyError> {
let ids = blocking(pool, move |conn| {
do_send_local_notifs(conn, &mentions, &comment, &user, &post)
})
.await?;
Ok(ids)
}
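// Synchronous half of send_local_notifs: inserts the user_mention rows and
// emails users who have notifications enabled.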
fn do_send_local_notifs(
conn: &diesel::PgConnection,
mentions: &[MentionData],
comment: &Comment,
user: &User_,
post: &Post,
) -> Vec<i32> {
let mut recipient_ids = Vec::new();
let hostname = &format!("https://{}", Settings::get().hostname);
// Send the local mentions
for mention in mentions
.iter()
.filter(|m| m.is_local() && m.name.ne(&user.name))
.collect::<Vec<&MentionData>>()
{
if let Ok(mention_user) = User_::read_from_name(&conn, &mention.name) {
// TODO
// At some point, make it so you can't tag the parent creator either
// This can cause two notifications, one for reply and the other for mention
recipient_ids.push(mention_user.id);
let user_mention_form = UserMentionForm {
recipient_id: mention_user.id,
comment_id: comment.id,
read: None,
};
// Allow this to fail softly, since comment edits might re-update or replace it
// Let the uniqueness constraint handle this failure
match UserMention::create(&conn, &user_mention_form) {
Ok(_mention) => (),
Err(_e) => error!("{}", &_e),
};
// Send an email to those users that have notifications on
if mention_user.send_notifications_to_email {
if let Some(mention_email) = mention_user.email {
let subject = &format!("{} - Mentioned by {}", Settings::get().hostname, user.name,);
let html = &format!(
"<h1>User Mention</h1><br><div>{} - {}</div><br><a href={}/inbox>inbox</a>",
user.name, comment.content, hostname
);
match send_email(subject, &mention_email, &mention_user.name, html) {
Ok(_o) => _o,
Err(e) => error!("{}", e),
};
}
}
}
}
// Send notifs to the parent commenter / poster
match comment.parent_id {
Some(parent_id) => {
if let Ok(parent_comment) = Comment::read(&conn, parent_id) {
if parent_comment.creator_id != user.id {
if let Ok(parent_user) = User_::read(&conn, parent_comment.creator_id) {
recipient_ids.push(parent_user.id);
if parent_user.send_notifications_to_email {
if let Some(comment_reply_email) = parent_user.email {
let subject = &format!("{} - Reply from {}", Settings::get().hostname, user.name,);
let html = &format!(
"<h1>Comment Reply</h1><br><div>{} - {}</div><br><a href={}/inbox>inbox</a>",
user.name, comment.content, hostname
);
match send_email(subject, &comment_reply_email, &parent_user.name, html) {
Ok(_o) => _o,
Err(e) => error!("{}", e),
};
}
}
}
}
}
}
// It's a post
None => {
if post.creator_id != user.id {
if let Ok(parent_user) = User_::read(&conn, post.creator_id) {
recipient_ids.push(parent_user.id);
if parent_user.send_notifications_to_email {
if let Some(post_reply_email) = parent_user.email {
let subject = &format!("{} - Reply from {}", Settings::get().hostname, user.name,);
let html = &format!(
"<h1>Post Reply</h1><br><div>{} - {}</div><br><a href={}/inbox>inbox</a>",
user.name, comment.content, hostname
);
match send_email(subject, &post_reply_email, &parent_user.name, html) {
Ok(_o) => _o,
Err(e) => error!("{}", e),
};
}
}
}
}
}
};
recipient_ids
}


@@ -1,40 +1,19 @@
use super::*;
use crate::{
api::{claims::Claims, APIError, Oper, Perform},
apub::ActorType,
blocking,
websocket::{
server::{JoinCommunityRoom, SendCommunityRoomMessage},
UserOperation,
WebsocketInfo,
},
DbPool,
};
use lemmy_db::{naive_now, Bannable, Crud, Followable, Joinable, SortType};
use lemmy_utils::{
generate_actor_keypair,
is_valid_community_name,
make_apub_endpoint,
naive_from_unix,
slur_check,
slurs_vec_to_str,
EndpointType,
};
use serde::{Deserialize, Serialize};
use diesel::PgConnection;
use std::str::FromStr;
#[derive(Serialize, Deserialize)]
pub struct GetCommunity {
id: Option<i32>,
pub name: Option<String>,
name: Option<String>,
auth: Option<String>,
}
#[derive(Serialize, Deserialize)]
pub struct GetCommunityResponse {
pub community: CommunityView,
pub moderators: Vec<CommunityModeratorView>,
pub admins: Vec<UserView>,
moderators: Vec<CommunityModeratorView>,
admins: Vec<UserView>,
pub online: usize,
}
@@ -53,17 +32,17 @@ pub struct CommunityResponse {
pub community: CommunityView,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize)]
pub struct ListCommunities {
pub sort: String,
pub page: Option<i64>,
pub limit: Option<i64>,
pub auth: Option<String>,
sort: String,
page: Option<i64>,
limit: Option<i64>,
auth: Option<String>,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize)]
pub struct ListCommunitiesResponse {
pub communities: Vec<CommunityView>,
communities: Vec<CommunityView>,
}
#[derive(Serialize, Deserialize, Clone)]
@@ -76,7 +55,7 @@ pub struct BanFromCommunity {
auth: String,
}
#[derive(Serialize, Deserialize, Clone)]
#[derive(Serialize, Deserialize)]
pub struct BanFromCommunityResponse {
user: UserView,
banned: bool,
@@ -90,7 +69,7 @@ pub struct AddModToCommunity {
auth: String,
}
#[derive(Serialize, Deserialize, Clone)]
#[derive(Serialize, Deserialize)]
pub struct AddModToCommunityResponse {
moderators: Vec<CommunityModeratorView>,
}
@@ -134,15 +113,8 @@ pub struct TransferCommunity {
auth: String,
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<GetCommunity> {
type Response = GetCommunityResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<GetCommunityResponse, LemmyError> {
impl Perform<GetCommunityResponse> for Oper<GetCommunity> {
fn perform(&self, conn: &PgConnection) -> Result<GetCommunityResponse, Error> {
let data: &GetCommunity = &self.data;
let user_id: Option<i32> = match &data.auth {
@@ -156,81 +128,47 @@ impl Perform for Oper<GetCommunity> {
None => None,
};
let name = data.name.to_owned().unwrap_or_else(|| "main".to_string());
let community = match data.id {
Some(id) => blocking(pool, move |conn| Community::read(conn, id)).await??,
None => match blocking(pool, move |conn| Community::read_from_name(conn, &name)).await? {
Ok(community) => community,
Err(_e) => return Err(APIError::err("couldnt_find_community").into()),
},
let community_id = match data.id {
Some(id) => id,
None => {
match Community::read_from_name(
&conn,
data.name.to_owned().unwrap_or_else(|| "main".to_string()),
) {
Ok(community) => community.id,
Err(_e) => return Err(APIError::err("couldnt_find_community").into()),
}
}
};
let community_id = community.id;
let community_view = match blocking(pool, move |conn| {
CommunityView::read(conn, community_id, user_id)
})
.await?
{
let community_view = match CommunityView::read(&conn, community_id, user_id) {
Ok(community) => community,
Err(_e) => return Err(APIError::err("couldnt_find_community").into()),
};
let community_id = community.id;
let moderators: Vec<CommunityModeratorView> = match blocking(pool, move |conn| {
CommunityModeratorView::for_community(conn, community_id)
})
.await?
{
let moderators = match CommunityModeratorView::for_community(&conn, community_id) {
Ok(moderators) => moderators,
Err(_e) => return Err(APIError::err("couldnt_find_community").into()),
};
let site = blocking(pool, move |conn| Site::read(conn, 1)).await??;
let site_creator_id = site.creator_id;
let mut admins = blocking(pool, move |conn| UserView::admins(conn)).await??;
let site_creator_id = Site::read(&conn, 1)?.creator_id;
let mut admins = UserView::admins(&conn)?;
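// Make sure the site creator is listed first among the admins.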
let creator_index = admins.iter().position(|r| r.id == site_creator_id).unwrap();
let creator_user = admins.remove(creator_index);
admins.insert(0, creator_user);
let online = if let Some(ws) = websocket_info {
if let Some(id) = ws.id {
ws.chatserver.do_send(JoinCommunityRoom {
community_id: community.id,
id,
});
}
// TODO
1
// let fut = async {
// ws.chatserver.send(GetCommunityUsersOnline {community_id}).await.unwrap()
// };
// Runtime::new().unwrap().block_on(fut)
} else {
0
};
let res = GetCommunityResponse {
// Return the community response
Ok(GetCommunityResponse {
community: community_view,
moderators,
admins,
online,
};
// Return the community response
Ok(res)
online: 0,
})
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<CreateCommunity> {
type Response = CommunityResponse;
async fn perform(
&self,
pool: &DbPool,
_websocket_info: Option<WebsocketInfo>,
) -> Result<CommunityResponse, LemmyError> {
impl Perform<CommunityResponse> for Oper<CreateCommunity> {
fn perform(&self, conn: &PgConnection) -> Result<CommunityResponse, Error> {
let data: &CreateCommunity = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -252,21 +190,14 @@ impl Perform for Oper<CreateCommunity> {
}
}
if !is_valid_community_name(&data.name) {
return Err(APIError::err("invalid_community_name").into());
}
let user_id = claims.id;
// Check for a site ban
let user_view = blocking(pool, move |conn| UserView::read(conn, user_id)).await??;
if user_view.banned {
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
// When you create a community, make sure the user becomes a moderator and a follower
let keypair = generate_actor_keypair()?;
let community_form = CommunityForm {
name: data.name.to_owned(),
title: data.title.to_owned(),
@@ -277,44 +208,36 @@ impl Perform for Oper<CreateCommunity> {
deleted: None,
nsfw: data.nsfw,
updated: None,
actor_id: make_apub_endpoint(EndpointType::Community, &data.name).to_string(),
local: true,
private_key: Some(keypair.private_key),
public_key: Some(keypair.public_key),
last_refreshed_at: None,
published: None,
};
let inserted_community =
match blocking(pool, move |conn| Community::create(conn, &community_form)).await? {
Ok(community) => community,
Err(_e) => return Err(APIError::err("community_already_exists").into()),
};
let inserted_community = match Community::create(&conn, &community_form) {
Ok(community) => community,
Err(_e) => return Err(APIError::err("community_already_exists").into()),
};
let community_moderator_form = CommunityModeratorForm {
community_id: inserted_community.id,
user_id,
};
let join = move |conn: &'_ _| CommunityModerator::join(conn, &community_moderator_form);
if blocking(pool, join).await?.is_err() {
return Err(APIError::err("community_moderator_already_exists").into());
}
let _inserted_community_moderator =
match CommunityModerator::join(&conn, &community_moderator_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_moderator_already_exists").into()),
};
let community_follower_form = CommunityFollowerForm {
community_id: inserted_community.id,
user_id,
};
let follow = move |conn: &'_ _| CommunityFollower::follow(conn, &community_follower_form);
if blocking(pool, follow).await?.is_err() {
return Err(APIError::err("community_follower_already_exists").into());
}
let _inserted_community_follower =
match CommunityFollower::follow(&conn, &community_follower_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_follower_already_exists").into()),
};
let community_view = blocking(pool, move |conn| {
CommunityView::read(conn, inserted_community.id, Some(user_id))
})
.await??;
let community_view = CommunityView::read(&conn, inserted_community.id, Some(user_id))?;
Ok(CommunityResponse {
community: community_view,
@@ -322,15 +245,8 @@ impl Perform for Oper<CreateCommunity> {
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<EditCommunity> {
type Response = CommunityResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<CommunityResponse, LemmyError> {
impl Perform<CommunityResponse> for Oper<EditCommunity> {
fn perform(&self, conn: &PgConnection) -> Result<CommunityResponse, Error> {
let data: &EditCommunity = &self.data;
if let Err(slurs) = slur_check(&data.name) {
@@ -352,65 +268,39 @@ impl Perform for Oper<EditCommunity> {
Err(_e) => return Err(APIError::err("not_logged_in").into()),
};
if !is_valid_community_name(&data.name) {
return Err(APIError::err("invalid_community_name").into());
}
let user_id = claims.id;
// Check for a site ban
let user = blocking(pool, move |conn| User_::read(conn, user_id)).await??;
if user.banned {
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
// Verify its a mod
let edit_id = data.edit_id;
let mut editors: Vec<i32> = Vec::new();
editors.append(
&mut blocking(pool, move |conn| {
CommunityModeratorView::for_community(conn, edit_id)
.map(|v| v.into_iter().map(|m| m.user_id).collect())
})
.await??,
);
editors.append(
&mut blocking(pool, move |conn| {
UserView::admins(conn).map(|v| v.into_iter().map(|a| a.id).collect())
})
.await??,
&mut CommunityModeratorView::for_community(&conn, data.edit_id)?
.into_iter()
.map(|m| m.user_id)
.collect(),
);
editors.append(&mut UserView::admins(&conn)?.into_iter().map(|a| a.id).collect());
if !editors.contains(&user_id) {
return Err(APIError::err("no_community_edit_allowed").into());
}
let edit_id = data.edit_id;
let read_community = blocking(pool, move |conn| Community::read(conn, edit_id)).await??;
let community_form = CommunityForm {
name: data.name.to_owned(),
title: data.title.to_owned(),
description: data.description.to_owned(),
category_id: data.category_id.to_owned(),
creator_id: read_community.creator_id,
creator_id: user_id,
removed: data.removed.to_owned(),
deleted: data.deleted.to_owned(),
nsfw: data.nsfw,
updated: Some(naive_now()),
actor_id: read_community.actor_id,
local: read_community.local,
private_key: read_community.private_key,
public_key: read_community.public_key,
last_refreshed_at: None,
published: None,
};
let edit_id = data.edit_id;
let updated_community = match blocking(pool, move |conn| {
Community::update(conn, edit_id, &community_form)
})
.await?
{
let _updated_community = match Community::update(&conn, data.edit_id, &community_form) {
Ok(community) => community,
Err(_e) => return Err(APIError::err("couldnt_update_community").into()),
};
@@ -428,68 +318,19 @@ impl Perform for Oper<EditCommunity> {
reason: data.reason.to_owned(),
expires,
};
blocking(pool, move |conn| ModRemoveCommunity::create(conn, &form)).await??;
ModRemoveCommunity::create(&conn, &form)?;
}
if let Some(deleted) = data.deleted.to_owned() {
if deleted {
updated_community
.send_delete(&user, &self.client, pool)
.await?;
} else {
updated_community
.send_undo_delete(&user, &self.client, pool)
.await?;
}
} else if let Some(removed) = data.removed.to_owned() {
if removed {
updated_community
.send_remove(&user, &self.client, pool)
.await?;
} else {
updated_community
.send_undo_remove(&user, &self.client, pool)
.await?;
}
}
let community_view = CommunityView::read(&conn, data.edit_id, Some(user_id))?;
let edit_id = data.edit_id;
let community_view = blocking(pool, move |conn| {
CommunityView::read(conn, edit_id, Some(user_id))
})
.await??;
let res = CommunityResponse {
Ok(CommunityResponse {
community: community_view,
};
if let Some(ws) = websocket_info {
// Strip out the user id and subscribed when sending to others
let mut res_sent = res.clone();
res_sent.community.user_id = None;
res_sent.community.subscribed = None;
ws.chatserver.do_send(SendCommunityRoomMessage {
op: UserOperation::EditCommunity,
response: res_sent,
community_id: data.edit_id,
my_id: ws.id,
});
}
Ok(res)
})
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<ListCommunities> {
type Response = ListCommunitiesResponse;
async fn perform(
&self,
pool: &DbPool,
_websocket_info: Option<WebsocketInfo>,
) -> Result<ListCommunitiesResponse, LemmyError> {
impl Perform<ListCommunitiesResponse> for Oper<ListCommunities> {
fn perform(&self, conn: &PgConnection) -> Result<ListCommunitiesResponse, Error> {
let data: &ListCommunities = &self.data;
let user_claims: Option<Claims> = match &data.auth {
@@ -512,33 +353,21 @@ impl Perform for Oper<ListCommunities> {
let sort = SortType::from_str(&data.sort)?;
let page = data.page;
let limit = data.limit;
let communities = blocking(pool, move |conn| {
CommunityQueryBuilder::create(conn)
.sort(&sort)
.for_user(user_id)
.show_nsfw(show_nsfw)
.page(page)
.limit(limit)
.list()
})
.await??;
let communities = CommunityQueryBuilder::create(&conn)
.sort(&sort)
.for_user(user_id)
.show_nsfw(show_nsfw)
.page(data.page)
.limit(data.limit)
.list()?;
// Return the list of communities
Ok(ListCommunitiesResponse { communities })
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<FollowCommunity> {
type Response = CommunityResponse;
async fn perform(
&self,
pool: &DbPool,
_websocket_info: Option<WebsocketInfo>,
) -> Result<CommunityResponse, LemmyError> {
impl Perform<CommunityResponse> for Oper<FollowCommunity> {
fn perform(&self, conn: &PgConnection) -> Result<CommunityResponse, Error> {
let data: &FollowCommunity = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -548,53 +377,24 @@ impl Perform for Oper<FollowCommunity> {
let user_id = claims.id;
let community_id = data.community_id;
let community = blocking(pool, move |conn| Community::read(conn, community_id)).await??;
let community_follower_form = CommunityFollowerForm {
community_id: data.community_id,
user_id,
};
if community.local {
if data.follow {
let follow = move |conn: &'_ _| CommunityFollower::follow(conn, &community_follower_form);
if blocking(pool, follow).await?.is_err() {
return Err(APIError::err("community_follower_already_exists").into());
}
} else {
let unfollow =
move |conn: &'_ _| CommunityFollower::unfollow(conn, &community_follower_form);
if blocking(pool, unfollow).await?.is_err() {
return Err(APIError::err("community_follower_already_exists").into());
}
}
if data.follow {
match CommunityFollower::follow(&conn, &community_follower_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_follower_already_exists").into()),
};
} else {
let user = blocking(pool, move |conn| User_::read(conn, user_id)).await??;
if data.follow {
// Don't actually add to the community followers here, because you need
// to wait for the accept
user
.send_follow(&community.actor_id, &self.client, pool)
.await?;
} else {
user
.send_unfollow(&community.actor_id, &self.client, pool)
.await?;
let unfollow =
move |conn: &'_ _| CommunityFollower::unfollow(conn, &community_follower_form);
if blocking(pool, unfollow).await?.is_err() {
return Err(APIError::err("community_follower_already_exists").into());
}
}
// TODO: this needs to return a "pending" state, until Accept is received from the remote server
match CommunityFollower::ignore(&conn, &community_follower_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_follower_already_exists").into()),
};
}
let community_id = data.community_id;
let community_view = blocking(pool, move |conn| {
CommunityView::read(conn, community_id, Some(user_id))
})
.await??;
let community_view = CommunityView::read(&conn, data.community_id, Some(user_id))?;
Ok(CommunityResponse {
community: community_view,
@@ -602,15 +402,8 @@ impl Perform for Oper<FollowCommunity> {
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<GetFollowedCommunities> {
type Response = GetFollowedCommunitiesResponse;
async fn perform(
&self,
pool: &DbPool,
_websocket_info: Option<WebsocketInfo>,
) -> Result<GetFollowedCommunitiesResponse, LemmyError> {
impl Perform<GetFollowedCommunitiesResponse> for Oper<GetFollowedCommunities> {
fn perform(&self, conn: &PgConnection) -> Result<GetFollowedCommunitiesResponse, Error> {
let data: &GetFollowedCommunities = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -620,29 +413,19 @@ impl Perform for Oper<GetFollowedCommunities> {
let user_id = claims.id;
let communities = match blocking(pool, move |conn| {
CommunityFollowerView::for_user(conn, user_id)
})
.await?
{
Ok(communities) => communities,
_ => return Err(APIError::err("system_err_login").into()),
};
let communities: Vec<CommunityFollowerView> =
match CommunityFollowerView::for_user(&conn, user_id) {
Ok(communities) => communities,
Err(_e) => return Err(APIError::err("system_err_login").into()),
};
// Return the followed communities
Ok(GetFollowedCommunitiesResponse { communities })
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<BanFromCommunity> {
type Response = BanFromCommunityResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<BanFromCommunityResponse, LemmyError> {
impl Perform<BanFromCommunityResponse> for Oper<BanFromCommunity> {
fn perform(&self, conn: &PgConnection) -> Result<BanFromCommunityResponse, Error> {
let data: &BanFromCommunity = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -652,43 +435,21 @@ impl Perform for Oper<BanFromCommunity> {
let user_id = claims.id;
let mut community_moderators: Vec<i32> = vec![];
let community_id = data.community_id;
community_moderators.append(
&mut blocking(pool, move |conn| {
CommunityModeratorView::for_community(&conn, community_id)
.map(|v| v.into_iter().map(|m| m.user_id).collect())
})
.await??,
);
community_moderators.append(
&mut blocking(pool, move |conn| {
UserView::admins(conn).map(|v| v.into_iter().map(|a| a.id).collect())
})
.await??,
);
if !community_moderators.contains(&user_id) {
return Err(APIError::err("couldnt_update_community").into());
}
let community_user_ban_form = CommunityUserBanForm {
community_id: data.community_id,
user_id: data.user_id,
};
if data.ban {
let ban = move |conn: &'_ _| CommunityUserBan::ban(conn, &community_user_ban_form);
if blocking(pool, ban).await?.is_err() {
return Err(APIError::err("community_user_already_banned").into());
}
match CommunityUserBan::ban(&conn, &community_user_ban_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_user_already_banned").into()),
};
} else {
let unban = move |conn: &'_ _| CommunityUserBan::unban(conn, &community_user_ban_form);
if blocking(pool, unban).await?.is_err() {
return Err(APIError::err("community_user_already_banned").into());
}
match CommunityUserBan::unban(&conn, &community_user_ban_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_user_already_banned").into()),
};
}
// Mod tables
@@ -705,38 +466,19 @@ impl Perform for Oper<BanFromCommunity> {
banned: Some(data.ban),
expires,
};
blocking(pool, move |conn| ModBanFromCommunity::create(conn, &form)).await??;
ModBanFromCommunity::create(&conn, &form)?;
let user_id = data.user_id;
let user_view = blocking(pool, move |conn| UserView::read(conn, user_id)).await??;
let user_view = UserView::read(&conn, data.user_id)?;
let res = BanFromCommunityResponse {
Ok(BanFromCommunityResponse {
user: user_view,
banned: data.ban,
};
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendCommunityRoomMessage {
op: UserOperation::BanFromCommunity,
response: res.clone(),
community_id: data.community_id,
my_id: ws.id,
});
}
Ok(res)
})
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<AddModToCommunity> {
type Response = AddModToCommunityResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<AddModToCommunityResponse, LemmyError> {
impl Perform<AddModToCommunityResponse> for Oper<AddModToCommunity> {
fn perform(&self, conn: &PgConnection) -> Result<AddModToCommunityResponse, Error> {
let data: &AddModToCommunity = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -751,38 +493,16 @@ impl Perform for Oper<AddModToCommunity> {
user_id: data.user_id,
};
let mut community_moderators: Vec<i32> = vec![];
let community_id = data.community_id;
community_moderators.append(
&mut blocking(pool, move |conn| {
CommunityModeratorView::for_community(&conn, community_id)
.map(|v| v.into_iter().map(|m| m.user_id).collect())
})
.await??,
);
community_moderators.append(
&mut blocking(pool, move |conn| {
UserView::admins(conn).map(|v| v.into_iter().map(|a| a.id).collect())
})
.await??,
);
if !community_moderators.contains(&user_id) {
return Err(APIError::err("couldnt_update_community").into());
}
if data.added {
let join = move |conn: &'_ _| CommunityModerator::join(conn, &community_moderator_form);
if blocking(pool, join).await?.is_err() {
return Err(APIError::err("community_moderator_already_exists").into());
}
match CommunityModerator::join(&conn, &community_moderator_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_moderator_already_exists").into()),
};
} else {
let leave = move |conn: &'_ _| CommunityModerator::leave(conn, &community_moderator_form);
if blocking(pool, leave).await?.is_err() {
return Err(APIError::err("community_moderator_already_exists").into());
}
match CommunityModerator::leave(&conn, &community_moderator_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_moderator_already_exists").into()),
};
}
// Mod tables
@@ -792,38 +512,16 @@ impl Perform for Oper<AddModToCommunity> {
community_id: data.community_id,
removed: Some(!data.added),
};
blocking(pool, move |conn| ModAddCommunity::create(conn, &form)).await??;
ModAddCommunity::create(&conn, &form)?;
let community_id = data.community_id;
let moderators = blocking(pool, move |conn| {
CommunityModeratorView::for_community(conn, community_id)
})
.await??;
let moderators = CommunityModeratorView::for_community(&conn, data.community_id)?;
let res = AddModToCommunityResponse { moderators };
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendCommunityRoomMessage {
op: UserOperation::AddModToCommunity,
response: res.clone(),
community_id: data.community_id,
my_id: ws.id,
});
}
Ok(res)
Ok(AddModToCommunityResponse { moderators })
}
}
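
Both BanFromCommunity and AddModToCommunity rebuild the same community_moderators list, moderator ids plus admin ids, before the permission check. A hedged consolidation, reusing the exact calls from the hunks above (only the helper name is invented):

// Hypothetical helper mirroring the duplicated permission check.
fn ensure_mod_or_admin(conn: &PgConnection, community_id: i32, user_id: i32) -> Result<(), Error> {
  let mut allowed: Vec<i32> = CommunityModeratorView::for_community(conn, community_id)?
    .into_iter()
    .map(|m| m.user_id)
    .collect();
  allowed.extend(UserView::admins(conn)?.into_iter().map(|a| a.id));
  if allowed.contains(&user_id) {
    Ok(())
  } else {
    Err(APIError::err("couldnt_update_community").into())
  }
}
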
#[async_trait::async_trait(?Send)]
impl Perform for Oper<TransferCommunity> {
type Response = GetCommunityResponse;
async fn perform(
&self,
pool: &DbPool,
_websocket_info: Option<WebsocketInfo>,
) -> Result<GetCommunityResponse, LemmyError> {
impl Perform<GetCommunityResponse> for Oper<TransferCommunity> {
fn perform(&self, conn: &PgConnection) -> Result<GetCommunityResponse, Error> {
let data: &TransferCommunity = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -833,14 +531,10 @@ impl Perform for Oper<TransferCommunity> {
let user_id = claims.id;
let community_id = data.community_id;
let read_community = blocking(pool, move |conn| Community::read(conn, community_id)).await??;
let site_creator_id =
blocking(pool, move |conn| Site::read(conn, 1).map(|s| s.creator_id)).await??;
let mut admins = blocking(pool, move |conn| UserView::admins(conn)).await??;
let read_community = Community::read(&conn, data.community_id)?;
let site_creator_id = Site::read(&conn, 1)?.creator_id;
let mut admins = UserView::admins(&conn)?;
let creator_index = admins.iter().position(|r| r.id == site_creator_id).unwrap();
let creator_user = admins.remove(creator_index);
admins.insert(0, creator_user);
@@ -855,31 +549,20 @@ impl Perform for Oper<TransferCommunity> {
title: read_community.title,
description: read_community.description,
category_id: read_community.category_id,
creator_id: data.user_id, // This makes the new user the community creator
creator_id: data.user_id,
removed: None,
deleted: None,
nsfw: read_community.nsfw,
updated: Some(naive_now()),
actor_id: read_community.actor_id,
local: read_community.local,
private_key: read_community.private_key,
public_key: read_community.public_key,
last_refreshed_at: None,
published: None,
};
let community_id = data.community_id;
let update = move |conn: &'_ _| Community::update(conn, community_id, &community_form);
if blocking(pool, update).await?.is_err() {
return Err(APIError::err("couldnt_update_community").into());
let _updated_community = match Community::update(&conn, data.community_id, &community_form) {
Ok(community) => community,
Err(_e) => return Err(APIError::err("couldnt_update_community").into()),
};
// You also have to re-do the community_moderator table, reordering it.
let community_id = data.community_id;
let mut community_mods = blocking(pool, move |conn| {
CommunityModeratorView::for_community(conn, community_id)
})
.await??;
let mut community_mods = CommunityModeratorView::for_community(&conn, data.community_id)?;
let creator_index = community_mods
.iter()
.position(|r| r.user_id == data.user_id)
@@ -887,23 +570,19 @@ impl Perform for Oper<TransferCommunity> {
let creator_user = community_mods.remove(creator_index);
community_mods.insert(0, creator_user);
let community_id = data.community_id;
blocking(pool, move |conn| {
CommunityModerator::delete_for_community(conn, community_id)
})
.await??;
CommunityModerator::delete_for_community(&conn, data.community_id)?;
// TODO: this should probably be a bulk operation
for cmod in &community_mods {
let community_moderator_form = CommunityModeratorForm {
community_id: cmod.community_id,
user_id: cmod.user_id,
};
let join = move |conn: &'_ _| CommunityModerator::join(conn, &community_moderator_form);
if blocking(pool, join).await?.is_err() {
return Err(APIError::err("community_moderator_already_exists").into());
}
let _inserted_community_moderator =
match CommunityModerator::join(&conn, &community_moderator_form) {
Ok(user) => user,
Err(_e) => return Err(APIError::err("community_moderator_already_exists").into()),
};
}
// Mod tables
@@ -913,24 +592,14 @@ impl Perform for Oper<TransferCommunity> {
community_id: data.community_id,
removed: Some(false),
};
blocking(pool, move |conn| ModAddCommunity::create(conn, &form)).await??;
ModAddCommunity::create(&conn, &form)?;
let community_id = data.community_id;
let community_view = match blocking(pool, move |conn| {
CommunityView::read(conn, community_id, Some(user_id))
})
.await?
{
let community_view = match CommunityView::read(&conn, data.community_id, Some(user_id)) {
Ok(community) => community,
Err(_e) => return Err(APIError::err("couldnt_find_community").into()),
};
let community_id = data.community_id;
let moderators = match blocking(pool, move |conn| {
CommunityModeratorView::for_community(conn, community_id)
})
.await?
{
let moderators = match CommunityModeratorView::for_community(&conn, data.community_id) {
Ok(moderators) => moderators,
Err(_e) => return Err(APIError::err("couldnt_find_community").into()),
};
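
TransferCommunity reorders both the admin list and the moderator list with the same remove-then-insert shuffle, and GetPost repeats it below for admins. A generic helper in that spirit (hypothetical, and avoiding the bare .unwrap() on position):

// Hypothetical: move the first matching element to the front.
fn move_to_front<T>(items: &mut Vec<T>, mut matches: impl FnMut(&T) -> bool) {
  if let Some(i) = items.iter().position(|x| matches(x)) {
    let first = items.remove(i);
    items.insert(0, first);
  }
}

Called as move_to_front(&mut admins, |r| r.id == site_creator_id), it degrades gracefully when the creator is missing instead of panicking.
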

View File

@@ -1,8 +1,29 @@
use crate::{websocket::WebsocketInfo, DbPool, LemmyError};
use actix_web::client::Client;
use lemmy_db::{community::*, community_view::*, moderator::*, site::*, user::*, user_view::*};
use crate::db::category::*;
use crate::db::comment::*;
use crate::db::comment_view::*;
use crate::db::community::*;
use crate::db::community_view::*;
use crate::db::moderator::*;
use crate::db::moderator_views::*;
use crate::db::password_reset_request::*;
use crate::db::post::*;
use crate::db::post_view::*;
use crate::db::private_message::*;
use crate::db::private_message_view::*;
use crate::db::site::*;
use crate::db::site_view::*;
use crate::db::user::*;
use crate::db::user_mention::*;
use crate::db::user_mention_view::*;
use crate::db::user_view::*;
use crate::db::*;
use crate::{
extract_usernames, naive_from_unix, naive_now, remove_slurs, slur_check, slurs_vec_to_str,
};
use diesel::PgConnection;
use failure::Error;
use serde::{Deserialize, Serialize};
pub mod claims;
pub mod comment;
pub mod community;
pub mod post;
@@ -25,22 +46,16 @@ impl APIError {
pub struct Oper<T> {
data: T,
client: Client,
}
impl<Data> Oper<Data> {
pub fn new(data: Data, client: Client) -> Oper<Data> {
Oper { data, client }
impl<T> Oper<T> {
pub fn new(data: T) -> Oper<T> {
Oper { data }
}
}
#[async_trait::async_trait(?Send)]
pub trait Perform {
type Response: serde::ser::Serialize + Send;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<Self::Response, LemmyError>;
pub trait Perform<T> {
fn perform(&self, conn: &PgConnection) -> Result<T, Error>
where
T: Sized;
}
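
The core of this file's change is the Perform trait itself: the response type moves from a generic parameter to an associated type, and perform becomes async, taking the connection pool plus optional websocket context instead of a bare connection. A stubbed side-by-side sketch (stand-in types, with the traits renamed so both shapes can coexist in one snippet):

// Stand-ins so the sketch is self-contained; not the project's types.
struct PgConnection;
struct DbPool;
struct WebsocketInfo;
struct Error;
struct LemmyError;

// Old shape: response type as a trait parameter, synchronous call.
trait PerformSync<T> {
  fn perform(&self, conn: &PgConnection) -> Result<T, Error>;
}

// New shape: response as an associated type, async call.
#[async_trait::async_trait(?Send)]
trait PerformAsync {
  type Response: serde::ser::Serialize + Send;
  async fn perform(
    &self,
    pool: &DbPool,
    websocket_info: Option<WebsocketInfo>,
  ) -> Result<Self::Response, LemmyError>;
}

One plausible motivation for the associated type: a dispatcher can refer to each operation's Response when serializing replies instead of threading a free type parameter through every call site.
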

View File

@@ -1,44 +1,8 @@
use crate::{
api::{claims::Claims, APIError, Oper, Perform},
apub::{ApubLikeableType, ApubObjectType},
blocking,
fetch_iframely_and_pictrs_data,
websocket::{
server::{JoinCommunityRoom, JoinPostRoom, SendPost},
UserOperation,
WebsocketInfo,
},
DbPool,
LemmyError,
};
use lemmy_db::{
comment_view::*,
community_view::*,
moderator::*,
naive_now,
post::*,
post_view::*,
site::*,
site_view::*,
user::*,
user_view::*,
Crud,
Likeable,
ListingType,
Saveable,
SortType,
};
use lemmy_utils::{
is_valid_post_title,
make_apub_endpoint,
slur_check,
slurs_vec_to_str,
EndpointType,
};
use serde::{Deserialize, Serialize};
use super::*;
use diesel::PgConnection;
use std::str::FromStr;
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize)]
pub struct CreatePost {
name: String,
url: Option<String>,
@@ -69,20 +33,19 @@ pub struct GetPostResponse {
pub online: usize,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize)]
pub struct GetPosts {
type_: String,
sort: String,
page: Option<i64>,
limit: Option<i64>,
pub community_id: Option<i32>,
pub community_name: Option<String>,
auth: Option<String>,
}
#[derive(Serialize, Deserialize, Debug)]
#[derive(Serialize, Deserialize)]
pub struct GetPostsResponse {
pub posts: Vec<PostView>,
posts: Vec<PostView>,
}
#[derive(Serialize, Deserialize)]
@@ -116,15 +79,8 @@ pub struct SavePost {
auth: String,
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<CreatePost> {
type Response = PostResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<PostResponse, LemmyError> {
impl Perform<PostResponse> for Oper<CreatePost> {
fn perform(&self, conn: &PgConnection) -> Result<PostResponse, Error> {
let data: &CreatePost = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -142,32 +98,20 @@ impl Perform for Oper<CreatePost> {
}
}
if !is_valid_post_title(&data.name) {
return Err(APIError::err("invalid_post_title").into());
}
let user_id = claims.id;
// Check for a community ban
let community_id = data.community_id;
let is_banned =
move |conn: &'_ _| CommunityUserBanView::get(conn, user_id, community_id).is_ok();
if blocking(pool, is_banned).await? {
if CommunityUserBanView::get(&conn, user_id, data.community_id).is_ok() {
return Err(APIError::err("community_ban").into());
}
// Check for a site ban
let user = blocking(pool, move |conn| User_::read(conn, user_id)).await??;
if user.banned {
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
// Fetch Iframely and pictrs cached image
let (iframely_title, iframely_description, iframely_html, pictrs_thumbnail) =
fetch_iframely_and_pictrs_data(&self.client, data.url.to_owned()).await;
let post_form = PostForm {
name: data.name.trim().to_owned(),
name: data.name.to_owned(),
url: data.url.to_owned(),
body: data.body.to_owned(),
community_id: data.community_id,
@@ -178,16 +122,9 @@ impl Perform for Oper<CreatePost> {
locked: None,
stickied: None,
updated: None,
embed_title: iframely_title,
embed_description: iframely_description,
embed_html: iframely_html,
thumbnail_url: pictrs_thumbnail,
ap_id: "http://fake.com".into(),
local: true,
published: None,
};
let inserted_post = match blocking(pool, move |conn| Post::create(conn, &post_form)).await? {
let inserted_post = match Post::create(&conn, &post_form) {
Ok(post) => post,
Err(e) => {
let err_type = if e.to_string() == "value too long for type character varying(200)" {
@@ -200,20 +137,6 @@ impl Perform for Oper<CreatePost> {
}
};
let inserted_post_id = inserted_post.id;
let updated_post = match blocking(pool, move |conn| {
let apub_id =
make_apub_endpoint(EndpointType::Post, &inserted_post_id.to_string()).to_string();
Post::update_ap_id(conn, inserted_post_id, apub_id)
})
.await?
{
Ok(post) => post,
Err(_e) => return Err(APIError::err("couldnt_create_post").into()),
};
updated_post.send_create(&user, &self.client, pool).await?;
// They like their own post by default
let like_form = PostLikeForm {
post_id: inserted_post.id,
@@ -221,47 +144,24 @@ impl Perform for Oper<CreatePost> {
score: 1,
};
let like = move |conn: &'_ _| PostLike::like(conn, &like_form);
if blocking(pool, like).await?.is_err() {
return Err(APIError::err("couldnt_like_post").into());
}
updated_post.send_like(&user, &self.client, pool).await?;
// Only add the like if the score isn't 0
let _inserted_like = match PostLike::like(&conn, &like_form) {
Ok(like) => like,
Err(_e) => return Err(APIError::err("couldnt_like_post").into()),
};
// Refetch the view
let inserted_post_id = inserted_post.id;
let post_view = match blocking(pool, move |conn| {
PostView::read(conn, inserted_post_id, Some(user_id))
})
.await?
{
let post_view = match PostView::read(&conn, inserted_post.id, Some(user_id)) {
Ok(post) => post,
Err(_e) => return Err(APIError::err("couldnt_find_post").into()),
};
let res = PostResponse { post: post_view };
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendPost {
op: UserOperation::CreatePost,
post: res.clone(),
my_id: ws.id,
});
}
Ok(res)
Ok(PostResponse { post: post_view })
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<GetPost> {
type Response = GetPostResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<GetPostResponse, LemmyError> {
impl Perform<GetPostResponse> for Oper<GetPost> {
fn perform(&self, conn: &PgConnection) -> Result<GetPostResponse, Error> {
let data: &GetPost = &self.data;
let user_id: Option<i32> = match &data.auth {
@@ -275,60 +175,27 @@ impl Perform for Oper<GetPost> {
None => None,
};
let id = data.id;
let post_view = match blocking(pool, move |conn| PostView::read(conn, id, user_id)).await? {
let post_view = match PostView::read(&conn, data.id, user_id) {
Ok(post) => post,
Err(_e) => return Err(APIError::err("couldnt_find_post").into()),
};
let id = data.id;
let comments = blocking(pool, move |conn| {
CommentQueryBuilder::create(conn)
.for_post_id(id)
.my_user_id(user_id)
.limit(9999)
.list()
})
.await??;
let comments = CommentQueryBuilder::create(&conn)
.for_post_id(data.id)
.my_user_id(user_id)
.limit(9999)
.list()?;
let community_id = post_view.community_id;
let community = blocking(pool, move |conn| {
CommunityView::read(conn, community_id, user_id)
})
.await??;
let community = CommunityView::read(&conn, post_view.community_id, user_id)?;
let community_id = post_view.community_id;
let moderators = blocking(pool, move |conn| {
CommunityModeratorView::for_community(conn, community_id)
})
.await??;
let moderators = CommunityModeratorView::for_community(&conn, post_view.community_id)?;
let site_creator_id =
blocking(pool, move |conn| Site::read(conn, 1).map(|s| s.creator_id)).await??;
let mut admins = blocking(pool, move |conn| UserView::admins(conn)).await??;
let site_creator_id = Site::read(&conn, 1)?.creator_id;
let mut admins = UserView::admins(&conn)?;
let creator_index = admins.iter().position(|r| r.id == site_creator_id).unwrap();
let creator_user = admins.remove(creator_index);
admins.insert(0, creator_user);
let online = if let Some(ws) = websocket_info {
if let Some(id) = ws.id {
ws.chatserver.do_send(JoinPostRoom {
post_id: data.id,
id,
});
}
// TODO
1
// let fut = async {
// ws.chatserver.send(GetPostUsersOnline {post_id: data.id}).await.unwrap()
// };
// Runtime::new().unwrap().block_on(fut)
} else {
0
};
// Return the post response
Ok(GetPostResponse {
post: post_view,
@@ -336,20 +203,13 @@ impl Perform for Oper<GetPost> {
community,
moderators,
admins,
online,
online: 0,
})
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<GetPosts> {
type Response = GetPostsResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<GetPostsResponse, LemmyError> {
impl Perform<GetPostsResponse> for Oper<GetPosts> {
fn perform(&self, conn: &PgConnection) -> Result<GetPostsResponse, Error> {
let data: &GetPosts = &self.data;
let user_claims: Option<Claims> = match &data.auth {
@@ -373,55 +233,26 @@ impl Perform for Oper<GetPosts> {
let type_ = ListingType::from_str(&data.type_)?;
let sort = SortType::from_str(&data.sort)?;
let page = data.page;
let limit = data.limit;
let community_id = data.community_id;
let community_name = data.community_name.to_owned();
let posts = match blocking(pool, move |conn| {
PostQueryBuilder::create(conn)
.listing_type(type_)
.sort(&sort)
.show_nsfw(show_nsfw)
.for_community_id(community_id)
.for_community_name(community_name)
.my_user_id(user_id)
.page(page)
.limit(limit)
.list()
})
.await?
let posts = match PostQueryBuilder::create(&conn)
.listing_type(type_)
.sort(&sort)
.show_nsfw(show_nsfw)
.for_community_id(data.community_id)
.my_user_id(user_id)
.page(data.page)
.limit(data.limit)
.list()
{
Ok(posts) => posts,
Err(_e) => return Err(APIError::err("couldnt_get_posts").into()),
};
if let Some(ws) = websocket_info {
// You don't need to join the specific community room, because this is already handled by
// GetCommunity
if data.community_id.is_none() {
if let Some(id) = ws.id {
// 0 is the "all" community
ws.chatserver.do_send(JoinCommunityRoom {
community_id: 0,
id,
});
}
}
}
Ok(GetPostsResponse { posts })
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<CreatePostLike> {
type Response = PostResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<PostResponse, LemmyError> {
impl Perform<PostResponse> for Oper<CreatePostLike> {
fn perform(&self, conn: &PgConnection) -> Result<PostResponse, Error> {
let data: &CreatePostLike = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -433,26 +264,20 @@ impl Perform for Oper<CreatePostLike> {
// Don't do a downvote if site has downvotes disabled
if data.score == -1 {
let site = blocking(pool, move |conn| SiteView::read(conn)).await??;
let site = SiteView::read(&conn)?;
if !site.enable_downvotes {
return Err(APIError::err("downvotes_disabled").into());
}
}
// Check for a community ban
let post_id = data.post_id;
let post = blocking(pool, move |conn| Post::read(conn, post_id)).await??;
let community_id = post.community_id;
let is_banned =
move |conn: &'_ _| CommunityUserBanView::get(conn, user_id, community_id).is_ok();
if blocking(pool, is_banned).await? {
let post = Post::read(&conn, data.post_id)?;
if CommunityUserBanView::get(&conn, user_id, post.community_id).is_ok() {
return Err(APIError::err("community_ban").into());
}
// Check for a site ban
let user = blocking(pool, move |conn| User_::read(conn, user_id)).await??;
if user.banned {
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
@@ -463,60 +288,29 @@ impl Perform for Oper<CreatePostLike> {
};
// Remove any likes first
let like_form2 = like_form.clone();
blocking(pool, move |conn| PostLike::remove(conn, &like_form2)).await??;
PostLike::remove(&conn, &like_form)?;
// Only add the like if the score isn't 0
let do_add = like_form.score != 0 && (like_form.score == 1 || like_form.score == -1);
if do_add {
let like_form2 = like_form.clone();
let like = move |conn: &'_ _| PostLike::like(conn, &like_form2);
if blocking(pool, like).await?.is_err() {
return Err(APIError::err("couldnt_like_post").into());
}
if like_form.score == 1 {
post.send_like(&user, &self.client, pool).await?;
} else if like_form.score == -1 {
post.send_dislike(&user, &self.client, pool).await?;
}
} else {
post.send_undo_like(&user, &self.client, pool).await?;
let _inserted_like = match PostLike::like(&conn, &like_form) {
Ok(like) => like,
Err(_e) => return Err(APIError::err("couldnt_like_post").into()),
};
}
let post_id = data.post_id;
let post_view = match blocking(pool, move |conn| {
PostView::read(conn, post_id, Some(user_id))
})
.await?
{
let post_view = match PostView::read(&conn, data.post_id, Some(user_id)) {
Ok(post) => post,
Err(_e) => return Err(APIError::err("couldnt_find_post").into()),
};
let res = PostResponse { post: post_view };
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendPost {
op: UserOperation::CreatePostLike,
post: res.clone(),
my_id: ws.id,
});
}
Ok(res)
// just output the score
Ok(PostResponse { post: post_view })
}
}
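
Stripped of the federation sends and the websocket broadcast, the vote flow in this hunk reduces to: clear any existing vote, then re-insert only when the score is 1 or -1 (0 means undo). In outline, using the same calls as the diff:

// Condensed control flow of CreatePostLike (sketch, not the diff's exact code).
PostLike::remove(&conn, &like_form)?; // always clear the previous vote first
if like_form.score == 1 || like_form.score == -1 {
  // Re-insert the new vote; a score of 0 stays removed ("undo").
  if PostLike::like(&conn, &like_form).is_err() {
    return Err(APIError::err("couldnt_like_post").into());
  }
}
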
#[async_trait::async_trait(?Send)]
impl Perform for Oper<EditPost> {
type Response = PostResponse;
async fn perform(
&self,
pool: &DbPool,
websocket_info: Option<WebsocketInfo>,
) -> Result<PostResponse, LemmyError> {
impl Perform<PostResponse> for Oper<EditPost> {
fn perform(&self, conn: &PgConnection) -> Result<PostResponse, Error> {
let data: &EditPost = &self.data;
if let Err(slurs) = slur_check(&data.name) {
@@ -529,10 +323,6 @@ impl Perform for Oper<EditPost> {
}
}
if !is_valid_post_title(&data.name) {
return Err(APIError::err("invalid_post_title").into());
}
let claims = match Claims::decode(&data.auth) {
Ok(claims) => claims.claims,
Err(_e) => return Err(APIError::err("not_logged_in").into()),
@@ -540,102 +330,44 @@ impl Perform for Oper<EditPost> {
let user_id = claims.id;
let edit_id = data.edit_id;
let read_post = blocking(pool, move |conn| Post::read(conn, edit_id)).await??;
// Verify its the creator or a mod or admin
let community_id = read_post.community_id;
let mut editors: Vec<i32> = vec![read_post.creator_id];
let mut moderators: Vec<i32> = vec![];
moderators.append(
&mut blocking(pool, move |conn| {
CommunityModeratorView::for_community(conn, community_id)
.map(|v| v.into_iter().map(|m| m.user_id).collect())
})
.await??,
let mut editors: Vec<i32> = vec![data.creator_id];
editors.append(
&mut CommunityModeratorView::for_community(&conn, data.community_id)?
.into_iter()
.map(|m| m.user_id)
.collect(),
);
moderators.append(
&mut blocking(pool, move |conn| {
UserView::admins(conn).map(|v| v.into_iter().map(|a| a.id).collect())
})
.await??,
);
editors.extend(&moderators);
editors.append(&mut UserView::admins(&conn)?.into_iter().map(|a| a.id).collect());
if !editors.contains(&user_id) {
return Err(APIError::err("no_post_edit_allowed").into());
}
// Check for a community ban
let community_id = read_post.community_id;
let is_banned =
move |conn: &'_ _| CommunityUserBanView::get(conn, user_id, community_id).is_ok();
if blocking(pool, is_banned).await? {
if CommunityUserBanView::get(&conn, user_id, data.community_id).is_ok() {
return Err(APIError::err("community_ban").into());
}
// Check for a site ban
let user = blocking(pool, move |conn| User_::read(conn, user_id)).await??;
if user.banned {
if UserView::read(&conn, user_id)?.banned {
return Err(APIError::err("site_ban").into());
}
// Fetch Iframely and Pictrs cached image
let (iframely_title, iframely_description, iframely_html, pictrs_thumbnail) =
fetch_iframely_and_pictrs_data(&self.client, data.url.to_owned()).await;
let post_form = {
// only modify some properties if they are a moderator
if moderators.contains(&user_id) {
PostForm {
name: data.name.trim().to_owned(),
url: data.url.to_owned(),
body: data.body.to_owned(),
creator_id: read_post.creator_id.to_owned(),
community_id: read_post.community_id,
removed: data.removed.to_owned(),
deleted: data.deleted.to_owned(),
nsfw: data.nsfw,
locked: data.locked.to_owned(),
stickied: data.stickied.to_owned(),
updated: Some(naive_now()),
embed_title: iframely_title,
embed_description: iframely_description,
embed_html: iframely_html,
thumbnail_url: pictrs_thumbnail,
ap_id: read_post.ap_id,
local: read_post.local,
published: None,
}
} else {
PostForm {
name: read_post.name.trim().to_owned(),
url: data.url.to_owned(),
body: data.body.to_owned(),
creator_id: read_post.creator_id.to_owned(),
community_id: read_post.community_id,
removed: Some(read_post.removed),
deleted: data.deleted.to_owned(),
nsfw: data.nsfw,
locked: Some(read_post.locked),
stickied: Some(read_post.stickied),
updated: Some(naive_now()),
embed_title: iframely_title,
embed_description: iframely_description,
embed_html: iframely_html,
thumbnail_url: pictrs_thumbnail,
ap_id: read_post.ap_id,
local: read_post.local,
published: None,
}
}
let post_form = PostForm {
name: data.name.to_owned(),
url: data.url.to_owned(),
body: data.body.to_owned(),
creator_id: data.creator_id.to_owned(),
community_id: data.community_id,
removed: data.removed.to_owned(),
deleted: data.deleted.to_owned(),
nsfw: data.nsfw,
locked: data.locked.to_owned(),
stickied: data.stickied.to_owned(),
updated: Some(naive_now()),
};
let edit_id = data.edit_id;
let res = blocking(pool, move |conn| Post::update(conn, edit_id, &post_form)).await?;
let updated_post: Post = match res {
let _updated_post = match Post::update(&conn, data.edit_id, &post_form) {
Ok(post) => post,
Err(e) => {
let err_type = if e.to_string() == "value too long for type character varying(200)" {
@@ -648,88 +380,43 @@ impl Perform for Oper<EditPost> {
}
};
if moderators.contains(&user_id) {
// Mod tables
if let Some(removed) = data.removed.to_owned() {
let form = ModRemovePostForm {
mod_user_id: user_id,
post_id: data.edit_id,
removed: Some(removed),
reason: data.reason.to_owned(),
};
blocking(pool, move |conn| ModRemovePost::create(conn, &form)).await??;
}
if let Some(locked) = data.locked.to_owned() {
let form = ModLockPostForm {
mod_user_id: user_id,
post_id: data.edit_id,
locked: Some(locked),
};
blocking(pool, move |conn| ModLockPost::create(conn, &form)).await??;
}
if let Some(stickied) = data.stickied.to_owned() {
let form = ModStickyPostForm {
mod_user_id: user_id,
post_id: data.edit_id,
stickied: Some(stickied),
};
blocking(pool, move |conn| ModStickyPost::create(conn, &form)).await??;
}
// Mod tables
if let Some(removed) = data.removed.to_owned() {
let form = ModRemovePostForm {
mod_user_id: user_id,
post_id: data.edit_id,
removed: Some(removed),
reason: data.reason.to_owned(),
};
ModRemovePost::create(&conn, &form)?;
}
if let Some(deleted) = data.deleted.to_owned() {
if deleted {
updated_post.send_delete(&user, &self.client, pool).await?;
} else {
updated_post
.send_undo_delete(&user, &self.client, pool)
.await?;
}
} else if let Some(removed) = data.removed.to_owned() {
if moderators.contains(&user_id) {
if removed {
updated_post.send_remove(&user, &self.client, pool).await?;
} else {
updated_post
.send_undo_remove(&user, &self.client, pool)
.await?;
}
}
} else {
updated_post.send_update(&user, &self.client, pool).await?;
if let Some(locked) = data.locked.to_owned() {
let form = ModLockPostForm {
mod_user_id: user_id,
post_id: data.edit_id,
locked: Some(locked),
};
ModLockPost::create(&conn, &form)?;
}
let edit_id = data.edit_id;
let post_view = blocking(pool, move |conn| {
PostView::read(conn, edit_id, Some(user_id))
})
.await??;
let res = PostResponse { post: post_view };
if let Some(ws) = websocket_info {
ws.chatserver.do_send(SendPost {
op: UserOperation::EditPost,
post: res.clone(),
my_id: ws.id,
});
if let Some(stickied) = data.stickied.to_owned() {
let form = ModStickyPostForm {
mod_user_id: user_id,
post_id: data.edit_id,
stickied: Some(stickied),
};
ModStickyPost::create(&conn, &form)?;
}
Ok(res)
let post_view = PostView::read(&conn, data.edit_id, Some(user_id))?;
Ok(PostResponse { post: post_view })
}
}
#[async_trait::async_trait(?Send)]
impl Perform for Oper<SavePost> {
type Response = PostResponse;
async fn perform(
&self,
pool: &DbPool,
_websocket_info: Option<WebsocketInfo>,
) -> Result<PostResponse, LemmyError> {
impl Perform<PostResponse> for Oper<SavePost> {
fn perform(&self, conn: &PgConnection) -> Result<PostResponse, Error> {
let data: &SavePost = &self.data;
let claims = match Claims::decode(&data.auth) {
@@ -745,22 +432,18 @@ impl Perform for Oper<SavePost> {
};
if data.save {
let save = move |conn: &'_ _| PostSaved::save(conn, &post_saved_form);
if blocking(pool, save).await?.is_err() {
return Err(APIError::err("couldnt_save_post").into());
}
match PostSaved::save(&conn, &post_saved_form) {
Ok(post) => post,
Err(_e) => return Err(APIError::err("couldnt_save_post").into()),
};
} else {
let unsave = move |conn: &'_ _| PostSaved::unsave(conn, &post_saved_form);
if blocking(pool, unsave).await?.is_err() {
return Err(APIError::err("couldnt_save_post").into());
}
match PostSaved::unsave(&conn, &post_saved_form) {
Ok(post) => post,
Err(_e) => return Err(APIError::err("couldnt_save_post").into()),
};
}
let post_id = data.post_id;
let post_view = blocking(pool, move |conn| {
PostView::read(conn, post_id, Some(user_id))
})
.await??;
let post_view = PostView::read(&conn, data.post_id, Some(user_id))?;
Ok(PostResponse { post: post_view })
}
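
The save/unsave branch above duplicates the same error mapping in both arms. Since the two calls need not share an Ok type, a compact variant can key on is_err() in each arm; a sketch only:

// Sketch: collapse SavePost's duplicated match arms.
let failed = if data.save {
  PostSaved::save(&conn, &post_saved_form).is_err()
} else {
  PostSaved::unsave(&conn, &post_saved_form).is_err()
};
if failed {
  return Err(APIError::err("couldnt_save_post").into());
}
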

Some files were not shown because too many files have changed in this diff