commit b545ddd758
Bento Silveira, 2023-07-23 20:20:50 -03:00 (committed via GitHub)
20 changed files with 377 additions and 310 deletions

.gitignore vendored

@@ -1,3 +1 @@
config/
data/
/docker-compose.yml

README.md

@@ -1,3 +1,70 @@
# Piped-Docker
See https://piped-docs.kavin.rocks/docs/self-hosting/#docker-compose-caddy-aio-script
## General notes
### Requirements
To self-host Piped you're going to need the following resources:
- Three DNS entries, one for each of the three modules: Frontend, Backend (API) and YouTube Proxy.
- An SSL certificate for HTTPS. An example is supplied, but you should create your own or get one from Let's Encrypt.
- A container manager - Docker or Podman - with the corresponding \*-compose.
For an instance serving only a private network, you're most likely going to use a self-signed certificate, since Let's Encrypt needs access to the server on port 80 to validate that you actually own the domain.
### Note to self-hosters running Proxmox
If you're going to self-host Piped on an LXC container created by Proxmox, note that such containers are perfectly capable of running both Docker and Podman. This is called nesting: running containers inside containers.
There is one caveat, though, and it has to do with how services are started on LXC. These containers normally don't have a non-root user, so you log in directly as root over SSH. Some people might be tempted to create a normal user and then use `sudo` to become root. This will cause you a lot of pain, because by doing that you won't have a d-bus session running: d-bus is started as a user unit by Systemd, and that unit only runs when you log in directly as the user, not when you `sudo` to it. I haven't tested this with Docker, but Podman breaks a little in this scenario, so if you're running Podman inside an LXC container, SSH in as root from the beginning.
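To see whether your current login actually has a d-bus user session, a quick diagnostic like this (a sketch, not part of the project) can help:

```shell
# Print where the user session bus should live and whether its socket exists.
# After `sudo -i` from a normal user this will typically report MISSING,
# while a direct root SSH login reports OK.
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-<unset>}"
if [ -S "${XDG_RUNTIME_DIR:-/nonexistent}/bus" ]; then
    echo "d-bus user session: OK"
else
    echo "d-bus user session: MISSING"
fi
```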
Wait, can't I run Podman without being root? Well, the Nginx reverse proxy that Piped uses to distribute requests to the frontend, backend and ytproxy listens on ports 80 and 443, and you need to be root to open those. If you want to run rootless, you're going to have to tinker a little, but you'll be in uncharted waters, sorry.
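If you do want to experiment with rootless, one common knob (not specific to Piped, and system-wide, so use with care) is lowering the first unprivileged port so a non-root process can bind 80 and 443. The file name below is hypothetical:

```
# /etc/sysctl.d/99-rootless-ports.conf
net.ipv4.ip_unprivileged_port_start = 80
```

Apply it with `sysctl --system`.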
## Configuration
### Creating a self-signed certificate
To create your own certificate, follow the instructions in this [DigitalOcean tutorial](https://www.digitalocean.com/community/tutorials/openssl-essentials-working-with-ssl-certificates-private-keys-and-csrs#generating-ssl-certificates), placing the files in the `config/` directory and replacing `piped.key` and `piped.crt` with the ones you created. To save you some time, you can use this command:
    cd config/
    openssl req -newkey rsa:2048 -nodes -keyout piped.key -x509 -days 365 -out piped.crt
Answer all the questions with appropriate values. The **only important field** you should pay attention to is "**Common Name (e.g. server FQDN or YOUR name) []**": this should be "\*.yourdomain.tld", meaning the certificate will serve all three hosts needed by Piped.
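If you'd rather skip the prompts entirely, `-subj` can supply the fields in one go (a sketch; only the wildcard Common Name matters here, everything else is a placeholder):

```shell
mkdir -p config && cd config/
# -subj answers every prompt at once; the wildcard CN covers the
# frontend, backend and proxy hostnames.
openssl req -newkey rsa:2048 -nodes -keyout piped.key \
    -x509 -days 365 -out piped.crt \
    -subj "/CN=*.yourdomain.tld"
```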
### Configuring Piped
All configuration should preferably be done using environment variables. All of them are listed in the [[configuration.env]] file.
The most important ones to set are the FQDNs (Fully Qualified Domain Names) of the three services. These names should be configured in the variables BACKEND_HOSTNAME, FRONTEND_HOSTNAME and PROXY_HOSTNAME **without** `https://`, slashes or anything other than the FQDN. The URLs **with** `https://` should be configured in the variables FRONTEND_URL, API_URL and PROXY_PART.
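For example, if your domain were example.com, the six variables would line up like this (hostnames are hypothetical):

```
BACKEND_HOSTNAME=pipedapi.example.com
FRONTEND_HOSTNAME=piped.example.com
PROXY_HOSTNAME=pipedproxy.example.com
FRONTEND_URL=https://piped.example.com
API_URL=https://pipedapi.example.com
PROXY_PART=https://pipedproxy.example.com
```

Note that each URL variable is simply its matching hostname with `https://` prepended.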
There are other settings you can change too, such as Captcha support, registration, etc.; just look for them in the config file.
### Configuring Postgres
Piped uses PostgreSQL; it is the only DB supported, and it's included in the compose file. If you want to use an external Postgres instead, put the relevant information in the appropriate variables and comment out the `postgres` service in the compose file. If you stick with the included DB, these variables will be used both to create the database and to configure the Hibernate library used by the backend.
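As a sketch, pointing Piped at an external Postgres only means changing the database variables in [[configuration.env]] (the hostname below is hypothetical) and commenting out the bundled `postgres` service:

```
POSTGRES_HOST=db.internal.example.com
POSTGRES_DB=piped
POSTGRES_USER=piped
POSTGRES_PASSWORD=a-strong-password
```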
## Running
After you finish creating the certificate and setting up the environment variables, run the project with one of the following commands:
- Docker
    docker-compose up -d
- Podman
    podman-compose up -d
Once all the containers finish starting, test that it's working by pointing your browser to https://frontend.yourdomain.tld and confirming that Piped loads the "Trending" page.
## Debugging
In case of problems, you can check the logs with `*-compose logs <container>`. For example:
    docker-compose logs nginx             # for Docker users
or
    podman-compose logs piped-backend     # for Podman users
If you need really verbose logs from Nginx, it is possible to enable debug mode, but that requires forcing the container to run `nginx-debug` instead of plain `nginx` and adding an `error_log ... debug;` statement to [[config/piped.conf.template]].
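A sketch of both changes (the compose override below is an assumption about how you'd force the debug binary; adjust to your setup):

```
# docker-compose.yml - run the debug build shipped in the nginx image
services:
  nginx:
    command: ["nginx-debug", "-g", "daemon off;"]
```

```
# config/piped.conf.template - log at debug level
error_log /var/log/nginx/error.log debug;
```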

config/piped.conf.template Normal file

@@ -0,0 +1,73 @@
server {
    listen *:80;
    # listen [::]:80;
    server_name ${FRONTEND_HOSTNAME} ${BACKEND_HOSTNAME} ${PROXY_HOSTNAME};
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header 'Referrer-Policy' 'no-referrer';
    # enforce https
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen *:443 ssl http2;
    # listen [::]:443 ssl http2;
    server_name ${FRONTEND_HOSTNAME};
    include snippets/ssl.conf;
    # Path to the root of your installation
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Connection "keep-alive";
        proxy_pass http://piped-frontend;
    }
}

proxy_cache_path /tmp/pipedapi_cache levels=1:2 keys_zone=pipedapi:4m max_size=2g inactive=60m use_temp_path=off;

server {
    listen *:443 ssl http2;
    # listen [::]:443 ssl http2;
    server_name ${BACKEND_HOSTNAME};
    include snippets/ssl.conf;
    # Path to the root of your installation
    location / {
        proxy_cache pipedapi;
        proxy_pass http://piped-backend:8080;
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
    }
}

server {
    listen *:443 ssl http2;
    # listen [::]:443 ssl http2;
    server_name ${PROXY_HOSTNAME};
    include snippets/ssl.conf;
    location ~ (/videoplayback|/api/v4/|/api/manifest/) {
        include snippets/ytproxy.conf;
        add_header Cache-Control private always;
        proxy_pass http://unix:/var/run/ytproxy/actix.sock;
    }
    location / {
        include snippets/ytproxy.conf;
        add_header Cache-Control "public, max-age=604800";
        proxy_pass http://unix:/var/run/ytproxy/actix.sock;
    }
}

config/piped.crt Normal file

@@ -0,0 +1,24 @@
-----BEGIN CERTIFICATE-----
MIID/zCCAuegAwIBAgIUdqkJshly/62rDQeqUUqyQiU5yJ8wDQYJKoZIhvcNAQEL
BQAwgY4xCzAJBgNVBAYTAkJSMQswCQYDVQQIDAJTUDESMBAGA1UEBwwJU2FvIFBh
dWxvMRAwDgYDVQQKDAdleGFtcGxlMRQwEgYDVQQLDAtkZXZlbG9wbWVudDEWMBQG
A1UEAwwNKi5leGFtcGxlLmNvbTEeMBwGCSqGSIb3DQEJARYPbWFpbC5leG1wbGUu
Y29tMB4XDTIzMDcyMjIxMzkzMloXDTI0MDcyMTIxMzkzMlowgY4xCzAJBgNVBAYT
AkJSMQswCQYDVQQIDAJTUDESMBAGA1UEBwwJU2FvIFBhdWxvMRAwDgYDVQQKDAdl
eGFtcGxlMRQwEgYDVQQLDAtkZXZlbG9wbWVudDEWMBQGA1UEAwwNKi5leGFtcGxl
LmNvbTEeMBwGCSqGSIb3DQEJARYPbWFpbC5leG1wbGUuY29tMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1Q4tR+qHr5wNuFvp18+B5rLSrZWrqb/9zZaE
65mTk70J7Wfa5kt+8wf7N7590ecazXcbuCnFmCBIMZGdZNE02C/0AQvgKKCmORhj
XDRlWupilguS6dMXhffgisZ/Dent9cQjZIFkOJ0ZNILbarPkQBvhdkFrn302Nujc
uF4cYrHvUa3WmtoUZspWqPKkl0AluOPTYm2QLGdT1M+nmr8AZs7JplYrBzT65fy/
Nvtl+VxVcGqRrTVDmsWJIO8Gx/NW/7wfK6GQxWYeUotXNZmBrr5jOB0YttMQrgUn
QydSpK6qrVWEBr8IaR+jS+eXJmWrEi0QBn6npwvx0+g+Jt5jWQIDAQABo1MwUTAd
BgNVHQ4EFgQU7+AGX4fm74vjDt4+9nyB0ElAIkgwHwYDVR0jBBgwFoAU7+AGX4fm
74vjDt4+9nyB0ElAIkgwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC
AQEAI4k5IYFkqMvmw1Nd53umzhSIayT+T54VHBz59ty5OR0m+6FpoZaon5+FnWlq
5otCrOjGG6jzhku+PMsaU8iBcgfAJpZASicuCFXBcc6yAGveTvnHFAwlhEoI5oI/
95tkh1hMy3hDZmMvYCOGnvS7vVY2JqPCFvgfRaMAaoe8gnlPOTx97fnnn/8+Aazi
puny/PYud3vaIfCzLWA/8Zo+r47sRlLkQQ9hrgcjrRW7oT+PHmY/31SWP+mFxwF7
v6FVArSABFRObkhgiFL3APKLnx34hWEA/8TpRryuYQdz7BYkUzJHpxzzn91KeLdm
492KHQ71tVy6zV5iB1aev8nVYw==
-----END CERTIFICATE-----

config/piped.key Normal file

@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDVDi1H6oevnA24
W+nXz4HmstKtlaupv/3NloTrmZOTvQntZ9rmS37zB/s3vn3R5xrNdxu4KcWYIEgx
kZ1k0TTYL/QBC+AooKY5GGNcNGVa6mKWC5Lp0xeF9+CKxn8N6e31xCNkgWQ4nRk0
gttqs+RAG+F2QWuffTY26Ny4Xhxise9Rrdaa2hRmylao8qSXQCW449NibZAsZ1PU
z6eavwBmzsmmVisHNPrl/L82+2X5XFVwapGtNUOaxYkg7wbH81b/vB8roZDFZh5S
i1c1mYGuvmM4HRi20xCuBSdDJ1KkrqqtVYQGvwhpH6NL55cmZasSLRAGfqenC/HT
6D4m3mNZAgMBAAECggEAGaZVST0xDLFK7ZETPAodZ3rL5l4Ihq04jxG5+utIWxb9
JPnF3sfkBrpFQlbKqwSZs3bNfYR553CrgFw5iLOvGv/a7m1RlVKR8HnBLI6aTTG+
oLXQABqL0HMhM1PmY/Rv05DDegwh1rcDG9FNPTFfH2C76hLCNDdM2Zt7Ry79V9w/
rfZPGJgQS1ji7whLEGmv+z8JFOpw4rxtgvMUG+M73v5bS9j6VWZ0FLMKoXChvQka
gTP4UtjW2sHPBHVPFVhba0UPzLPY87uvY2esvIqC11NhPLs0oXBv9EnlgDzi4/gF
zwY4TpByBJ+2LOEU3QC0ezW4wz3M/p5NQjDMu9I3IQKBgQD/2nUVynNccMlW7STH
zTihukg9paweCrElncSwluwf0jf3/0EizDbfCPRMBM5la5J8+mYEH/Lxa+XjpVhn
CSnfDCRa68iwr+1wyn6YA0hvTHARbSVw74P3UnUafVAdhDlF9WGqQ6HUnMDHArSD
u/x6q4J3daGegXn8EdLWUlB/JQKBgQDVLXCGtMjOkAUT+42uTavf+0PnogkX5KuY
VYXmwrF3MCDmefkfYnyJK2Luecag+nSoK9Sc553DkCAoGiyreDPNXKNIYLGxDPMo
d4hcrt6Ol9W7PTpzQoE3Lz8Bm2N3zuyblV0xRsGOOTQirMSz052CTD+nhlUkxvrl
EJnzVBoHJQKBgAMRianzPaL0L1X9jh1fVriJ1Wf33rKVij5bQAqmJLrU+Jre0tcp
/9Z48wUeYaNRwPYCwsp136IJmz45s2+46mmkaaM1hLipw31A0HfeQjYjgoyS9IoA
NWL3+DOTISzZcx5lrQAvw3cbUiyQ2b1iucp22B+6p2+ROfdN92tenVyJAoGANAqO
wOPbbcns427yrI2bmuddMWv2KlYRqfOe57G53y3pqjo2nfnOCzKDSVKDMgNSfUeN
9Ov6MKa7ou6Y3xdOFiE6X03zsxRFPCjKKk4qWMcqTzZoUYD3yIAJMpw7kSD71BOH
l6L9V3oRhzGEJ55OgmOY2o3JtVu6HjeKTcPHQt0CgYEAtpjb6sajZhM1sDlT2N/R
V9t+k+N9dRDy8acpGRxm5HGhqJMev6PTowGqCxex+F/meDioCoybNYa7JPAwwDvt
XzqUrgCIceQ2TLGETQLDgfu325aJo/WRQZrnrN0XY0Gc4wnI/GXUmz2VcVALLYfb
jmPy4nc4xejo/H+MyUc8Ksw=
-----END PRIVATE KEY-----

config/ssl.conf Normal file

@@ -0,0 +1,12 @@
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /etc/nginx/ssl/piped.crt;
ssl_certificate_key /etc/nginx/ssl/piped.key;
add_header 'Referrer-Policy' 'no-referrer';
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;

config/ytproxy.conf

@@ -1,18 +1,17 @@
proxy_buffering on;
proxy_buffers 1024 16k;
proxy_set_header X-Forwarded-For "";
proxy_set_header CF-Connecting-IP "";
proxy_hide_header "alt-svc";
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
access_log off;
aio threads=default;
aio_write on;
directio 16m;
proxy_buffering on;
proxy_buffers 1024 16k;
proxy_hide_header "alt-svc";
proxy_hide_header Cache-Control;
proxy_hide_header etag;
proxy_http_version 1.1;
proxy_set_header Connection keep-alive;
proxy_max_temp_file_size 32m;
access_log off;
proxy_pass http://unix:/var/run/ytproxy/actix.sock;
proxy_set_header CF-Connecting-IP "";
proxy_set_header Connection keep-alive;
proxy_set_header X-Forwarded-For "";
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;

configuration.env Normal file

@@ -0,0 +1,63 @@
###########################
# Hostname settings #
###########################
# Fully Qualified names of the services used by Piped
BACKEND_HOSTNAME=backend-host.example.com
FRONTEND_HOSTNAME=frontend-host.example.com
PROXY_HOSTNAME=proxy-host.example.com
###########################
# API container settings #
###########################
# Port the API server will listen on.
# This is used by other containers in this compose project and will
# listen only on the docker/podman network.
# If you need the API listening publicly, publish it using
#   ports:
#     - <ext-port>:<int-port>
# on the piped-backend service.
PORT=8080
# The number of workers to use for the server
HTTP_WORKERS=2
# Full URLs for the services. These need to be configured
# on your DNS service.
FRONTEND_URL=https://frontend-host.example.com
API_URL=https://backend-host.example.com
PROXY_PART=https://proxy-host.example.com
# Outgoing HTTP Proxy - eg: 127.0.0.1:8118
#HTTP_PROXY=127.0.0.1:8118
# Captcha Parameters
CAPTCHA_BASE_URL=https://api.capmonster.cloud/
CAPTCHA_API_KEY=INSERT_HERE
# Enable haveibeenpwned compromised password API
COMPROMISED_PASSWORD_CHECK=true
# Disable Registration
DISABLE_REGISTRATION=false
# Feed Retention Time in Days
FEED_RETENTION=30
###########################
# database settings #
###########################
# Settings for the Postgres database
POSTGRES_DB=piped
POSTGRES_HOST=postgres
POSTGRES_USER=piped
POSTGRES_PASSWORD=changeme
###########################
# Watchtower settings #
###########################
WATCHTOWER_CLEANUP=true
WATCHTOWER_INCLUDE_RESTARTING=true

@@ -1,16 +0,0 @@
#!/bin/sh
echo "Enter a hostname for the Frontend (eg: piped.kavin.rocks):" && read -r frontend
echo "Enter a hostname for the Backend (eg: pipedapi.kavin.rocks):" && read -r backend
echo "Enter a hostname for the Proxy (eg: pipedproxy.kavin.rocks):" && read -r proxy
echo "Enter the reverse proxy you would like to use (either caddy or nginx):" && read -r reverseproxy
rm -rf config/
rm -f docker-compose.yml
cp -r template/ config/
sed -i "s/FRONTEND_HOSTNAME/$frontend/g" config/*
sed -i "s/BACKEND_HOSTNAME/$backend/g" config/*
sed -i "s/PROXY_HOSTNAME/$proxy/g" config/*
mv config/docker-compose.$reverseproxy.yml docker-compose.yml

docker-compose.yml Normal file

@@ -0,0 +1,81 @@
version: "3"
services:
  piped-frontend:
    image: 1337kavin/piped-frontend:latest
    container_name: piped-frontend
    restart: unless-stopped
    depends_on:
      - piped-backend
    env_file:
      - configuration.env
    volumes:
      - ./entrypoint.d/host_replace.envsh:/docker-entrypoint.d/99-host_replace.envsh
  piped-proxy:
    image: 1337kavin/piped-proxy:latest
    container_name: piped-proxy
    restart: unless-stopped
    environment:
      - UDS=1
    volumes:
      - piped-proxy:/app/socket:z
  piped-backend:
    image: 1337kavin/piped:latest
    container_name: piped-backend
    restart: unless-stopped
    env_file:
      - configuration.env
    volumes:
      - ./entrypoint.d/backend-startup.sh:/app/backend-startup.sh:ro
    command: /bin/sh /app/backend-startup.sh
    depends_on:
      - postgres
  nginx:
    image: nginx:mainline-alpine
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    env_file:
      - configuration.env
    volumes:
      - ./config/piped.conf.template:/etc/nginx/templates/piped.conf.template:ro
      - ./config/ytproxy.conf:/etc/nginx/snippets/ytproxy.conf:ro
      - ./config/ssl.conf:/etc/nginx/snippets/ssl.conf
      - ./config/piped.key:/etc/nginx/ssl/piped.key
      - ./config/piped.crt:/etc/nginx/ssl/piped.crt
      - piped-proxy:/var/run/ytproxy:z
      - ./entrypoint.d/host_replace.envsh:/docker-entrypoint.d/99-host_replace.envsh
    depends_on:
      - piped-backend
      - piped-proxy
      - piped-frontend
  postgres:
    image: postgres:15
    restart: unless-stopped
    env_file:
      - configuration.env
    volumes:
      - ./data/db:/var/lib/postgresql/data
  # Podman users, be aware that watchtower relies on a Docker socket to work.
  # Look at the logs after startup with `podman-compose logs watchtower`.
  # If you see errors, make sure the podman service is enabled and the
  # socket /var/run/docker.sock exists.
  # If errors persist, comment out this entire section.
  # Watchtower is used to update the images automatically. Alternatively,
  # Podman offers a way to do that using Systemd; you could use that
  # for auto-updates instead.
  watchtower:
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
    env_file:
      - configuration.env
    command: piped-frontend piped-proxy piped-backend nginx postgres watchtower
volumes:
  piped-proxy: null

entrypoint.d/backend-startup.sh Normal file

@@ -0,0 +1,9 @@
#!/bin/sh
echo "hibernate.connection.url: jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}" > /app/config.properties
echo "hibernate.connection.username: ${POSTGRES_USER}" >> /app/config.properties
echo "hibernate.connection.password: ${POSTGRES_PASSWORD}" >> /app/config.properties
echo "hibernate.connection.driver_class: org.postgresql.Driver" >> /app/config.properties
echo "hibernate.dialect: org.hibernate.dialect.PostgreSQLDialect" >> /app/config.properties
exec java -server -Xmx1G -XX:+UnlockExperimentalVMOptions -XX:+OptimizeStringConcat -XX:+UseStringDeduplication -XX:+UseCompressedOops -XX:+UseNUMA -XX:+UseG1GC -Xshare:on -jar /app/piped.jar

entrypoint.d/host_replace.envsh Normal file

@@ -0,0 +1,9 @@
#!/bin/sh
if [ -d "/usr/share/nginx/html/assets" ]; then
    sed -i 's/pipedapi.kavin.rocks/'$BACKEND_HOSTNAME'/g' /usr/share/nginx/html/assets/*
fi
if [ -f "/etc/nginx/nginx.conf" ]; then
    sed -i '/user/s/nginx/root/' /etc/nginx/nginx.conf
fi

@@ -1,47 +0,0 @@
(global) {
    header {
        # disable FLoC tracking
        Permissions-Policy interest-cohort=()
        # enable HSTS
        Strict-Transport-Security max-age=31536000;
        # keep referrer data off
        Referrer-Policy no-referrer
        # prevent from appearing in search engines (optional, for private instances)
        #X-Robots-Tag noindex
    }
}

FRONTEND_HOSTNAME {
    reverse_proxy pipedfrontend:80
    import global
}

BACKEND_HOSTNAME {
    reverse_proxy nginx:80
    import global
}

PROXY_HOSTNAME {
    @ytproxy path /videoplayback* /api/v4/* /api/manifest/*
    import global
    route {
        header @ytproxy {
            Cache-Control private always
        }
        header / {
            Cache-Control "public, max-age=604800"
        }
        reverse_proxy unix//var/run/ytproxy/actix.sock {
            header_up -CF-Connecting-IP
            header_up -X-Forwarded-For
            header_down -etag
            header_down -alt-svc
        }
    }
}

@@ -1,37 +0,0 @@
# The port to listen on.
PORT: 8080
# The number of workers to use for the server
HTTP_WORKERS: 2
# Proxy
PROXY_PART: https://PROXY_HOSTNAME
# Outgoing HTTP Proxy - eg: 127.0.0.1:8118
#HTTP_PROXY: 127.0.0.1:8118
# Captcha Parameters
CAPTCHA_BASE_URL: https://api.capmonster.cloud/
CAPTCHA_API_KEY: INSERT_HERE
# Public API URL
API_URL: https://BACKEND_HOSTNAME
# Public Frontend URL
FRONTEND_URL: https://FRONTEND_HOSTNAME
# Enable haveibeenpwned compromised password API
COMPROMISED_PASSWORD_CHECK: true
# Disable Registration
DISABLE_REGISTRATION: false
# Feed Retention Time in Days
FEED_RETENTION: 30
# Hibernate properties
hibernate.connection.url: jdbc:postgresql://postgres:5432/piped
hibernate.connection.driver_class: org.postgresql.Driver
hibernate.dialect: org.hibernate.dialect.PostgreSQLDialect
hibernate.connection.username: piped
hibernate.connection.password: changeme

@@ -1,71 +0,0 @@
version: "3"
services:
  pipedfrontend:
    image: 1337kavin/piped-frontend:latest
    restart: unless-stopped
    depends_on:
      - piped
    container_name: piped-frontend
    entrypoint: ash -c 'sed -i s/pipedapi.kavin.rocks/BACKEND_HOSTNAME/g /usr/share/nginx/html/assets/* && /docker-entrypoint.sh && nginx -g "daemon off;"'
  piped-proxy:
    image: 1337kavin/piped-proxy:latest
    restart: unless-stopped
    environment:
      - UDS=1
    volumes:
      - piped-proxy:/app/socket
    container_name: piped-proxy
  piped:
    image: 1337kavin/piped:latest
    restart: unless-stopped
    volumes:
      - ./config/config.properties:/app/config.properties:ro
    depends_on:
      - postgres
    container_name: piped-backend
  nginx:
    image: nginx:mainline-alpine
    restart: unless-stopped
    volumes:
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./config/pipedapi.conf:/etc/nginx/conf.d/pipedapi.conf:ro
    container_name: nginx
    depends_on:
      - piped
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./config/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - piped-proxy:/var/run/ytproxy
    container_name: caddy
  postgres:
    image: postgres:15
    restart: unless-stopped
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=piped
      - POSTGRES_USER=piped
      - POSTGRES_PASSWORD=changeme
    container_name: postgres
  watchtower:
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
    container_name: watchtower
    command: piped-frontend piped-backend piped-proxy nginx caddy postgres watchtower
volumes:
  caddy_data: null
  piped-proxy: null

@@ -1,66 +0,0 @@
version: "3"
services:
  pipedfrontend:
    image: 1337kavin/piped-frontend:latest
    restart: unless-stopped
    depends_on:
      - piped
    container_name: piped-frontend
    entrypoint: ash -c 'sed -i s/pipedapi.kavin.rocks/BACKEND_HOSTNAME/g /usr/share/nginx/html/assets/* && /docker-entrypoint.sh && nginx -g "daemon off;"'
  piped-proxy:
    image: 1337kavin/piped-proxy:latest
    restart: unless-stopped
    environment:
      - UDS=1
    volumes:
      - piped-proxy:/app/socket
    container_name: piped-proxy
  piped:
    image: 1337kavin/piped:latest
    restart: unless-stopped
    volumes:
      - ./config/config.properties:/app/config.properties:ro
    depends_on:
      - postgres
    container_name: piped-backend
  nginx:
    image: nginx:mainline-alpine
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./config/pipedapi.conf:/etc/nginx/conf.d/pipedapi.conf:ro
      - ./config/pipedproxy.conf:/etc/nginx/conf.d/pipedproxy.conf:ro
      - ./config/pipedfrontend.conf:/etc/nginx/conf.d/pipedfrontend.conf:ro
      - ./config/ytproxy.conf:/etc/nginx/snippets/ytproxy.conf:ro
      - piped-proxy:/var/run/ytproxy
    container_name: nginx
    depends_on:
      - piped
      - piped-proxy
      - pipedfrontend
  postgres:
    image: postgres:15
    restart: unless-stopped
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=piped
      - POSTGRES_USER=piped
      - POSTGRES_PASSWORD=changeme
    container_name: postgres
  watchtower:
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
    container_name: watchtower
    command: piped-frontend piped-backend piped-proxy varnish nginx postgres watchtower
volumes:
  piped-proxy: null

@@ -1,33 +0,0 @@
user root;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 128;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nodelay on;
    keepalive_timeout 65;
    resolver 127.0.0.11 ipv6=off valid=10s;
    include /etc/nginx/conf.d/*.conf;
}

@@ -1,12 +0,0 @@
server {
    listen 80;
    server_name FRONTEND_HOSTNAME;
    set $backend "http://pipedfrontend:80";
    location / {
        proxy_pass $backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
    }
}

@@ -1,14 +0,0 @@
server {
    listen 80;
    server_name PROXY_HOSTNAME;
    location ~ (/videoplayback|/api/v4/|/api/manifest/) {
        include snippets/ytproxy.conf;
        add_header Cache-Control private always;
    }
    location / {
        include snippets/ytproxy.conf;
        add_header Cache-Control "public, max-age=604800";
    }
}