Compare commits

...

10 commits

Author SHA1 Message Date
3529e01f19
a
Some checks failed
File-uploader-crystal CI / build (push) Failing after 14s
2025-04-07 00:52:47 -04:00
a4562ca005
feat(webserver): add host option to the configuration 2025-01-02 19:06:05 -03:00
c554b772c8
0.9.3.5: only generate thumbnails on known extensions, remove trailing '/' from config.files and config.thumbnails 2024-11-26 20:56:58 -03:00
bb9ecee67b
0.9.3.4: Fix what I did yesterday 2024-11-21 13:30:19 -03:00
cb75b97520
0.9.3.3: Better handling when retrieving files, move rate limiter 2024-11-21 04:02:12 -03:00
fdfa782e91
0.9.3.2-1: Update docker compose file 2024-11-21 03:23:56 -03:00
99c22095f9
0.9.3.2: Delete entry from the DB if the file doesn't exist on the filesystem 2024-11-21 03:23:29 -03:00
9de4960932
0.9.3.1: Update Dockerfile and add compose file 2024-11-19 23:16:32 -03:00
0002c81429
0.9.3: BUGFIX! Fix deletion of thumbnails on check_old_files job.
- Add colors to logs
- Use static table names instead of config provided ones, it's kinda
  stupid to give the user an option to set the name of the table if I'm
  developing it for sqlite
2024-11-19 22:39:23 -03:00
b51513339c
0.9.2: Fix thumbnail folder generation, better chatterino config generation and better error handling 2024-10-21 13:54:51 -03:00
25 changed files with 200 additions and 1923 deletions

.gitignore
View file

@@ -5,3 +5,5 @@
*.dwarf
data
torexitnodes.txt
files
thumbnails

View file

@@ -1,35 +0,0 @@
# Based on https://github.com/iv-org/invidious/blob/master/docker/Dockerfile
FROM crystallang/crystal:1.13.2-alpine AS builder
RUN apk add --no-cache sqlite-static yaml-static
ARG release
WORKDIR /file-uploader-crystal
COPY ./shard.yml ./shard.yml
COPY ./shard.lock ./shard.lock
RUN shards install --production
COPY ./src/ ./src/
# TODO: .git folder is required for building; this is destructive.
# See definition of CURRENT_BRANCH, CURRENT_COMMIT and CURRENT_VERSION.
COPY ./.git/ ./.git/
RUN crystal build ./src/file-uploader-crystal.cr \
--release \
--static --warnings all
RUN apk add --no-cache tini
FROM alpine:3.18
WORKDIR /file-uploader-crystal
RUN addgroup -g 1000 -S file-uploader-crystal && \
adduser -u 1000 -S file-uploader-crystal -G file-uploader-crystal
COPY --chown=file-uploader-crystal ./config/config.* ./config/
RUN mv -n config/config.example.yml config/config.yml
COPY --from=builder /file-uploader-crystal/file-uploader-crystal .
RUN chmod o+rX -R ./config
EXPOSE 8080
USER file-uploader-crystal
ENTRYPOINT ["/sbin/tini", "--"]
CMD [ "/file-uploader-crystal/file-uploader-crystal" ]

View file

@@ -1,88 +0,0 @@
# file-uploader
Simple file uploader made on Crystal.
~~I'm making this to replace my current File uploader hosted on https://ayaya.beauty which uses https://github.com/nokonoko/uguu~~
Already replaced lol.
## Features
- Temporary file uploads like Uguu
- File deletion link (not available in frontend for now)
- Chatterino and ShareX support
- Video Thumbnails for Chatterino and FrankerFaceZ (Requires `ffmpeg` to be installed, can be disabled.)
- Rate Limiting
- [Small Admin API](./src/handling/admin.cr) that allows you to delete files, reset rate limits and more (Needs to be enabled in the configuration)
- Unix socket support if you don't want to deal with all the TCP overhead
- Automatic protocol detection (HTTPS or HTTP)
- Low memory usage: between 6 MB at idle and 25 MB while a file is being uploaded or retrieved, depending on your traffic.
## Usage
- Clone this repository, compile it using `shards build --release` and execute the server using `./bin/file-uploader`.
- Change the settings file `./config/config.yml` according to what you need (see the example request below).
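For a quick test of the `/upload` route, here is a minimal client sketch in Crystal (not part of the project); it assumes the server is listening on `http://127.0.0.1:8080` and that `example.png` exists in the current directory. The JSON fields named in the comments come from the upload handler shown later in this diff.
```
require "http/client"

# Build a multipart/form-data body with a single "file" field,
# which is the form field name the /upload handler reads.
io = IO::Memory.new
content_type = ""
HTTP::FormData.build(io) do |form|
  content_type = form.content_type
  File.open("example.png") do |file|
    form.file("file", file, HTTP::FormData::FileMetadata.new(filename: "example.png"))
  end
end

headers = HTTP::Headers{"Content-Type" => content_type}
response = HTTP::Client.post("http://127.0.0.1:8080/upload", headers: headers, body: io.to_s)
# The response JSON contains "link", "linkExt", "id", "ext", "name", "checksum",
# plus "deleteKey" and "deleteLink" when delete keys are enabled.
puts response.body
```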
## NGINX Server block
Assuming you are already using NGINX and know how to configure it, you can use this example server block.
```
server {
# You can keep the domain prefixed with `~.` if you want
# to allow users to use any domain to upload and retrieve
# files. Like xdxd.example.com or lolol.example.com .
# This will only work if you have a wildcard domain.
server_name ~.example.com example.com;
location / {
proxy_pass http://127.0.0.1:8080;
# This if you want to use a UNIX socket instead
#proxy_pass http://unix:/tmp/file-uploader.sock;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_pass_request_headers on;
}
# This should be the size_limit value (from config.yml)
client_max_body_size 512M;
listen 443 ssl;
http2 on;
}
```
## Systemd user service example
```
[Unit]
Description=file-uploader-crystal
After=network.target
[Service]
Type=simple
Restart=always
RestartSec=2
LimitNOFILE=4096
Environment="KEMAL_ENV=production"
ExecStart=%h/file-uploader-crystal/bin/file-uploader
WorkingDirectory=%h/file-uploader-crystal/
[Install]
WantedBy=default.target
```
## TODO
- ~~Add file size limit~~ ADDED
- ~~Fix error when accessing `http://127.0.0.1:8080` with an empty DB.~~ Fixed somehow.
- Better frontend...
- ~~Disable file deletion if `deleteFilesCheck` or `deleteFilesAfter` is set to `0`~~ DONE
- ~~Disable delete key if `deleteKeyLength` is `0`~~ DONE (But I think there is a better way to do it)
- ~~Exit if `fileameLength` is `0`~~ DONE
- ~~Disable file limit if `size_limit` is `0`~~ DONE
- ~~Prevent files from being overwritten in the event of a name collision~~ DONE
- Dockerfile and Docker image (Crystal doesn't have dependency hell like other languages, so this isn't strictly necessary, but it's useful for people who want an instant deploy)
- Custom file expiration using headers (Like rustypaste)
- Small CLI to upload files (like `rpaste` from rustypaste)
- Add more endpoints to Admin API

View file

@@ -1,44 +0,0 @@
files: "./files"
thumbnails: "./thumbnails"
generateThumbnails: true
db: "./db.sqlite3"
dbTableName: "files"
adminEnabled: true
adminApiKey: "asd"
fileameLength: 3
# In MiB
size_limit: 512
port: 8080
blockTorAddresses: true
# Every hour
torExitNodesCheck: 1600
torExitNodesUrl: "https://check.torproject.org/exit-addresses"
torExitNodesFile: "./torexitnodes.txt"
torMessage: "TOR IS BLOCKED!"
filesPerIP: 2
ipTableName: "ips"
rateLimitPeriod: 20
rateLimitMessage: ""
# If you define the unix socket, it will only listen on the socket and not the port.
#unix_socket: "/tmp/file-uploader.sock"
# In days
deleteFilesAfter: 1
# In seconds
deleteFilesCheck: 1600
deleteKeyLength: 4
siteInfo: "Whatever you want to put here"
siteWarning: "WARNING!"
log_level: "debug"
blockedExtensions:
- "exe"
# List of useragents that use OpenGraph to gather file information
opengraphUseragents:
- "chatterino-api-cache/"
- "FFZBot/"
- "Twitterbot/"
alternativeDomains:
- "ayaya.beauty"
- "lamartina.gay"

Binary file not shown (before: 46 KiB)

Binary file not shown (before: 71 KiB)

Binary file not shown (before: 52 KiB)

Binary file not shown (before: 17 KiB)

Binary file not shown

Binary file not shown

Binary file not shown

View file

@@ -1,169 +0,0 @@
// By chatgpt because I hate frontend and javascript kill me
document.addEventListener("DOMContentLoaded", () => {
const dropArea = document.getElementById("drop-area");
const fileInput = document.getElementById("fileElem");
const uploadStatus = document.getElementById("upload-status");
// Prevent default drag behaviors
["dragenter", "dragover", "dragleave", "drop"].forEach((eventName) => {
dropArea.addEventListener(eventName, preventDefaults, false);
document.body.addEventListener(eventName, preventDefaults, false);
});
// Highlight drop area when item is dragged over
["dragenter", "dragover"].forEach((eventName) => {
dropArea.addEventListener(eventName, highlight, false);
});
["dragleave", "drop"].forEach((eventName) => {
dropArea.addEventListener(eventName, unhighlight, false);
});
// Handle dropped files
dropArea.addEventListener("drop", handleDrop, false);
dropArea.addEventListener("click", () => fileInput.click());
// Handle file selection
fileInput.addEventListener(
"change",
() => {
const files = fileInput.files;
handleFiles(files);
},
false
);
// Handle pasted files
document.addEventListener("paste", handlePaste, false);
function preventDefaults(e) {
e.preventDefault();
e.stopPropagation();
}
function highlight() {
dropArea.classList.add("highlight");
}
function unhighlight() {
dropArea.classList.remove("highlight");
}
function handleDrop(e) {
const dt = e.dataTransfer;
const files = dt.files;
handleFiles(files);
}
function handlePaste(e) {
const items = e.clipboardData.items;
for (let i = 0; i < items.length; i++) {
const item = items[i];
if (item.kind === "file") {
const file = item.getAsFile();
handleFiles([file]);
}
}
}
function handleFiles(files) {
if (files.length > 0) {
for (const file of files) {
uploadFile(file);
}
}
}
function uploadFile(file) {
const url = "upload"; // Replace with your upload URL
const xhr = new XMLHttpRequest();
// Create a new upload status container and link elements
const uploadContainer = document.createElement("div");
const statusLink = document.createElement("div");
const uploadText = document.createElement("span");
const buttons = document.createElement("div");
const copyButton = document.createElement("button");
const deleteButton = document.createElement("button");
uploadContainer.className = "upload-status"; // Use the existing CSS class for styling
uploadContainer.appendChild(uploadText);
uploadContainer.appendChild(statusLink);
buttons.appendChild(copyButton)
buttons.appendChild(deleteButton)
uploadContainer.appendChild(buttons)
uploadStatus.appendChild(uploadContainer);
// Update upload text
uploadText.innerHTML = "0%";
uploadText.className = "percent";
statusLink.className = "status";
copyButton.className = "copy-button"; // Add class for styling
copyButton.innerHTML = "Copiar"; // Set button text
deleteButton.className = "delete-button";
deleteButton.innerHTML = "Borrar";
copyButton.style.display = "none";
deleteButton.style.display = "none";
// Update progress text
xhr.upload.addEventListener("progress", (e) => {
if (e.lengthComputable) {
const percentComplete = Math.round((e.loaded / e.total) * 100);
uploadText.innerHTML = `${percentComplete}%`; // Update the text with the percentage
}
});
xhr.onerror = () => {
console.error("Error:", xhr.status, xhr.statusText, xhr.responseText);
statusLink.textContent = "Error desconocido";
};
xhr.onload = () => {
// console.log("Response Status:", xhr.status);
// console.log("Response Text:", xhr.responseText);
if (xhr.status === 200) {
try {
const response = JSON.parse(xhr.responseText);
const fileLink = response.link;
statusLink.innerHTML = `<a href="${fileLink}" target="_blank">${fileLink}</a>`;
copyButton.style.display = "inline";
copyButton.onclick = () => copyToClipboard(fileLink);
deleteButton.style.display = "inline";
deleteButton.onclick = () => {
window.open(response.deleteLink, "_blank");
};
} catch (error) {
statusLink.textContent =
"Error desconocido, habla con el administrador";
}
} else if (xhr.status >= 400 && xhr.status < 500) {
try {
const errorResponse = JSON.parse(xhr.responseText);
statusLink.textContent = errorResponse.error || "Error del cliente.";
} catch (e) {
statusLink.textContent = "Error del cliente.";
}
} else {
statusLink.textContent = "Error del servidor.";
}
};
// Send file
const formData = new FormData();
formData.append("file", file);
xhr.open("POST", url, true);
xhr.send(formData);
}
// Function to copy the link to the clipboard
function copyToClipboard(text) {
navigator.clipboard
.writeText(text)
.then(() => {
// alert("Link copied to clipboard!"); // Notify the user
})
.catch((err) => {
console.error("Failed to copy: ", err);
});
}
});

View file

@@ -1,228 +0,0 @@
@font-face {
font-family: "FG";
font-weight: 500;
src: url('framd.ttf');
}
@font-face {
font-family: "FG";
font-weight: 900;
src: url('frahv.ttf');
}
@font-face {
font-family: "XFG";
font-weight: 900;
src: url('frahvmod.ttf');
}
html {
font-family: "FG";
background-image: linear-gradient(to bottom,
rgba(11, 11, 11, 0.92),
rgba(11, 11, 11, 0.92)),
url(./bliss-small.avif);
background-attachment: fixed;
background-repeat: no-repeat;
background-size: cover;
}
body {
/* font-family: Arial, sans-serif; */
/* background-color: #111; */
margin: 0;
padding: 20px;
}
p,
h1,
h2,
h3,
h4,
h5 {
color: aliceblue
}
h1 {
font-family: "FG";
font-weight: 200;
max-width: 100%;
overflow-wrap: break-word;
}
a {
text-decoration: none;
}
.bottom {
font-size: 0.9em;
/* margin-top: 1ch;*/
flex: 1;
text-align: center;
}
.bottom>p {
margin: 10px 0px;
}
.percent {
color: aliceblue
}
.container {
max-width: 800px;
margin: auto;
/* background: white; */
/*! padding: 20px; */
border-radius: 0px;
/*! box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); */
}
#drop-area {
/*! border: 2px solid #00ff00; */
/*! border-radius: 6px; */
/*! padding-left: 10px; */
/*! padding-right: 10px; */
text-align: center;
position: relative;
width: fit-content;
margin: 0 auto;
/* Center the element */
display: block;
/* Ensure it behaves as a block-level element */
background: rgba(202, 230, 190, .75);
border: 1px solid #b7d1a0;
border-radius: 4px;
color: #468847;
cursor: pointer;
/*! display: inline-block; */
font-size: 24px;
padding: 28px 48px;
text-shadow: 0 1px hsla(0, 0%, 100%, .5);
transition: background-color .25s, width .5s, height .5s;
}
.button {
display: inline-block;
padding: 10px 20px;
/* background: #; */
color: white;
border-radius: 5px;
cursor: pointer;
/* margin-top: 10px; */
}
.upload-status {
margin-top: 10px;
}
nav a,
nav>ul {
list-style: none;
margin: 0;
padding: 0;
text-align: center;
}
#upload-status {
margin: 20px;
/* Adjust as needed */
}
.upload-status {
display: flex;
align-items: center;
justify-content: space-between;
border: 2px solid #999;
/* Optional styling for the status box */
padding: 5px;
/* Optional padding */
/*! border-radius: 6px; */
/* Optional rounded corners */
/*! background-color: #f9f9f9; */
/* Optional background color */
}
.link-container {
display: flex;
align-items: center;
margin-left: auto;
/* Pushes the link and button to the right */
}
.link {
color: #ffb6c1;
text-decoration: none;
/* Remove underline from link */
margin-right: 5px;
/* Space between link and button */
}
.link:hover {
text-decoration: underline;
/* Optional: underline on hover */
}
.copy-button {
display: inline;
background-color: #7a6fff;
/* Button background color */
color: white;
/* Button text color */
border: none;
/* Remove border */
border-radius: 3px;
/* Rounded corners for the button */
padding: 5px 10px;
/* Button padding */
cursor: pointer;
/* Pointer cursor on hover */
font-weight: bold;
}
.delete-button {
display: inline;
background-color: #ff6f6f;
/* Button background color */
color: white;
/* Button text color */
border: none;
/* Remove border */
border-radius: 3px;
/* Rounded corners for the button */
padding: 5px 10px;
/* Button padding */
cursor: pointer;
/* Pointer cursor on hover */
margin-left: 6px;
font-weight: bold;
}
.copy-button:hover {
background-color: #6057ce;
/* Darker shade on hover */
}
.delete-button:hover {
background-color: #ce5757;
/* Darker shade on hover */
}
.status {
color: rgb(255, 132, 0);
}
a:link {
color: #ffb6c1
}
a:visited {
color: #ffb6c1
}
a:hover {
color: #ffb6c1
}

View file

@@ -1,165 +0,0 @@
// document.addEventListener("DOMContentLoaded", () => {
// const dropArea = document.getElementById("drop-area");
// const fileInput = document.getElementById("fileElem");
// const progressContainer = document.getElementById("progress-container");
// const progressBar = document.getElementById("progress-bar");
// const status = document.getElementById("status");
// // Prevent default drag behaviors
// ["dragenter", "dragover", "dragleave", "drop"].forEach(eventName => {
// dropArea.addEventListener(eventName, preventDefaults, false);
// document.body.addEventListener(eventName, preventDefaults, false);
// });
// // Highlight drop area when item is dragged over
// ["dragenter", "dragover"].forEach(eventName => {
// dropArea.addEventListener(eventName, highlight, false);
// });
// ["dragleave", "drop"].forEach(eventName => {
// dropArea.addEventListener(eventName, unhighlight, false);
// });
// // Handle dropped files
// dropArea.addEventListener("drop", handleDrop, false);
// dropArea.addEventListener("click", () => fileInput.click());
// // Handle file selection
// fileInput.addEventListener("change", () => {
// const files = fileInput.files;
// handleFiles(files);
// }, false);
// // Handle pasted files
// document.addEventListener("paste", handlePaste, false);
// function preventDefaults(e) {
// e.preventDefault();
// e.stopPropagation();
// }
// function highlight() {
// dropArea.classList.add("highlight");
// }
// function unhighlight() {
// dropArea.classList.remove("highlight");
// }
// function handleDrop(e) {
// const dt = e.dataTransfer;
// const files = dt.files;
// handleFiles(files);
// }
// function handlePaste(e) {
// const items = e.clipboardData.items;
// for (let i = 0; i < items.length; i++) {
// const item = items[i];
// if (item.kind === "file") {
// const file = item.getAsFile();
// handleFiles([file]);
// }
// }
// }
// function handleFiles(files) {
// if (files.length > 0) {
// uploadFile(files[0]);
// }
// }
// function uploadFile(file) {
// const url = "upload"; // Replace with your upload URL
// const xhr = new XMLHttpRequest();
// // Update progress bar
// xhr.upload.addEventListener("progress", (e) => {
// if (e.lengthComputable) {
// const percentComplete = (e.loaded / e.total) * 100;
// progressBar.style.width = percentComplete + "%"; // Set the width of the progress bar
// progressContainer.style.display = "block"; // Show progress container
// }
// });
// // Handle response
// xhr.onload = () => {
// if (xhr.status === 200) {
// try {
// const response = JSON.parse(xhr.responseText);
// const fileLink = response.link; // Assuming the response contains a key 'link'
// status.innerHTML = `<a href="${fileLink}" target="_blank">File uploaded successfully! Click here to view the file</a>`;
// } catch (error) {
// status.textContent = "File uploaded but failed to parse response.";
// }
// } else {
// status.textContent = "File upload failed.";
// }
// progressBar.style.width = "0"; // Reset progress bar
// progressContainer.style.display = "none"; // Hide progress container
// };
// // Handle errors
// xhr.onerror = () => {
// status.textContent = "An error occurred during the file upload.";
// progressBar.style.width = "0"; // Reset progress bar
// progressContainer.style.display = "none"; // Hide progress container
// };
// // Send file
// const formData = new FormData();
// formData.append("file", file);
// xhr.open("POST", url, true);
// xhr.send(formData);
// }
// });
function handleFiles(input) {
const files = input.files;
Array.from(files).forEach(file => {
// Display download link initially
document.querySelector(`#link-${file.name}`).textContent = "Uploading...";
// Create a new FormData instance
let formData = new FormData();
formData.append('file', file);
// Simulate a request to the server
fetch('/upload', { method: 'POST', body: formData })
.then(response => response.json())
.then(data => {
// Update the progress bar
document.querySelector(`#progress-${file.name}`).style.width = `${data.progress}%`;
// Display the download link
document.querySelector(`#link-${file.name}`).textContent = data.link;
})
.catch(error => console.error('Error:', error));
});
}
// Handle drag & drop
document.addEventListener('dragover', function(event) {
event.preventDefault();
event.stopPropagation();
});
document.addEventListener('drop', function(event) {
event.preventDefault();
event.stopPropagation();
const files = event.dataTransfer.files;
handleFiles(files);
}, false);
// Handle clipboard paste
document.addEventListener('paste', function(event) {
event.preventDefault();
event.stopPropagation();
const items = event.clipboardData.items;
if (items.length > 0 && items[0].type.indexOf("text") !== -1) {
const file = items[0].getAsFile();
handleFiles([file]);
}
}, false);

View file

@@ -3,6 +3,8 @@ require "yaml"
class Config
include YAML::Serializable
# Colorize logs
property colorize_logs : Bool = true
# Where the uploaded files will be located
property files : String = "./files"
# Where the thumbnails will be located when they are successfully generated
@@ -12,8 +14,6 @@ class Config
property generateThumbnails : Bool = false
# Where the SQLITE3 database will be located
property db : String = "./db.sqlite3"
# Name of the table that will be used for file information
property dbTableName : String = "files"
# Enable or disable the admin API
property adminEnabled : Bool = false
# The API key for admin routes. It's passed as a "X-Api-Key" header to the
@@ -25,8 +25,10 @@
property fileameLength : Int32 = 3
# In MiB
property size_limit : Int16 = 512
# TCP port
# Port on which the uploader will bind
property port : Int32 = 8080
# IP address on which the uploader will bind
property host : String = "127.0.0.1"
# A file path where do you want to place a unix socket (THIS WILL DISABLE ACCESS
# BY IP ADDRESS)
property unix_socket : String?
@@ -45,8 +47,6 @@ class Config
property torMessage : String? = "Tor is blocked!"
# How many files an IP address can upload to the server
property filesPerIP : Int32 = 32
# Name of the table that will be used for rate limit information
property ipTableName : String = "ips"
# How often is the file limit per IP reset? (in seconds)
property rateLimitPeriod : Int32 = 600
# TODO: UNUSED CONSTANT
@@ -87,5 +87,12 @@ class Config
puts "Config: fileameLength cannot be #{config.fileameLength}"
exit(1)
end
if config.files.ends_with?('/')
config.files = config.files.chomp('/')
end
if config.thumbnails.ends_with?('/')
config.thumbnails = config.thumbnails.chomp('/')
end
end
end
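For reference, a small sketch of what the trailing-slash normalization added above does; the literals are illustrative:
```
"./files/".chomp('/')     # => "./files"     (trailing '/' removed)
"./thumbnails".chomp('/') # => "./thumbnails" (unchanged, no trailing slash)
```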

View file

@@ -1,56 +1,94 @@
require "http"
require "kemal"
require "yaml"
require "db"
require "sqlite3"
require "digest"
require "./logger"
require "./routing"
require "./utils"
require "./handling/**"
require "./config"
require "./jobs"
require "./lib/**"
CONFIG = Config.load
Kemal.config.port = CONFIG.port
Kemal.config.port = 9999
Kemal.config.host_binding = "0.0.0.0"
Kemal.config.shutdown_message = false
Kemal.config.app_name = "file-uploader-crystal"
# https://github.com/iv-org/invidious/blob/90e94d4e6cc126a8b7a091d12d7a5556bfe369d5/src/invidious.cr#L136C1-L136C61
LOGGER = LogHandler.new(STDOUT, CONFIG.log_level)
# Give me a 128 bit CPU
# MAX_FILES = 58**CONFIG.fileameLength
SQL = DB.open("sqlite3://#{CONFIG.db}")
# https://github.com/iv-org/invidious/blob/90e94d4e6cc126a8b7a091d12d7a5556bfe369d5/src/invidious.cr#L78
CURRENT_BRANCH = {{ "#{`git branch | sed -n '/* /s///p'`.strip}" }}
CURRENT_COMMIT = {{ "#{`git rev-list HEAD --max-count=1 --abbrev-commit`.strip}" }}
CURRENT_VERSION = {{ "#{`git log -1 --format=%ci | awk '{print $1}' | sed s/-/./g`.strip}" }}
CURRENT_TAG = {{ "#{`git describe --long --abbrev=7 --tags | sed 's/([^-]*)-g.*/r\1/;s/-/./g'`.strip}" }}
YTIMG_POOLS = {} of String => YoutubeConnectionPool
struct YoutubeConnectionPool
property! url : URI
property! capacity : Int32
property! timeout : Float64
property pool : DB::Pool(HTTP::Client)
def initialize(url : URI, @capacity = 5, @timeout = 5.0)
@url = url
@pool = build_pool()
end
def size()
return @pool.size
end
def client(&)
conn = pool.checkout
begin
response = yield conn
rescue ex
puts "CLOSING CON: #{ex.message} + #{ex.inspect}"
conn.close
conn = make_client(url, force_resolve: true)
response = yield conn
ensure
pool.release(conn)
end
response
end
def get_pool()
return @pool
end
private def build_pool
options = DB::Pool::Options.new(
initial_pool_size: 0,
max_pool_size: capacity,
max_idle_pool_size: capacity,
checkout_timeout: timeout
)
DB::Pool(HTTP::Client).new(options) do
next make_client(url, force_resolve: true)
end
end
end
def make_client(url : URI, region = nil, force_resolve : Bool = false, force_youtube_headers : Bool = false, use_http_proxy : Bool = true)
client = HTTP::Client.new(url)
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
return client
end
# Fetches a HTTP pool for the specified subdomain of ytimg.com
#
# Creates a new one when the specified pool for the subdomain does not exist
def get_ytimg_pool(subdomain)
if pool = YTIMG_POOLS[subdomain]?
return pool
else
puts "ytimg_pool: Creating a new HTTP pool for \"https://#{subdomain}.ytimg.com\""
pool = YoutubeConnectionPool.new(URI.parse("https://#{subdomain}.ytimg.com"), capacity: ENV.fetch("POOL_SIZE", 100).to_i)
YTIMG_POOLS[subdomain] = pool
return pool
end
end
Utils.check_dependencies
Utils.create_db
Utils.create_files_dir
Utils.create_thumbnails_dir
Routing.register_all
Utils.delete_socket
Jobs.run
{% if flag?(:release) || flag?(:production) %}
Kemal.config.env = "production" if !ENV.has_key?("KEMAL_ENV")
{% end %}
if !CONFIG.unix_socket.nil?
sleep 1.second
LOGGER.info "Changing socket permissions to 777"
begin
File.chmod("#{CONFIG.unix_socket}", File::Permissions::All)
rescue ex
LOGGER.fatal "#{ex.message}"
exit(1)
end
end
sleep
Kemal.run

View file

@@ -1,174 +0,0 @@
require "../http-errors"
module Handling::Admin
extend self
# private macro json_fill(named_tuple, field_name)
# j.field {{field_name}}, {{named_tuple}}[:{{field_name}}]
# end
# /api/admin/delete
# curl -X POST -H "Content-Type: application/json" -H "X-Api-Key: asd" http://localhost:8080/api/admin/delete -d '{"files": ["j63"]}' | jq
def delete_file(env)
files = env.params.json["files"].as((Array(JSON::Any)))
successfull_files = [] of String
failed_files = [] of String
files.each do |file|
file = file.to_s
begin
fileinfo = SQL.query_one("SELECT filename, extension, thumbnail
FROM #{CONFIG.dbTableName}
WHERE filename = ?",
file,
as: {filename: String, extension: String, thumbnail: String | Nil})
# Delete file
File.delete("#{CONFIG.files}/#{fileinfo[:filename]}#{fileinfo[:extension]}")
if fileinfo[:thumbnail]
# Delete thumbnail
File.delete("#{CONFIG.thumbnails}/#{fileinfo[:thumbnail]}")
end
# Delete entry from db
SQL.exec "DELETE FROM #{CONFIG.dbTableName} WHERE filename = ?", file
LOGGER.debug "File '#{fileinfo[:filename]}' was deleted"
successfull_files << file
rescue ex : DB::NoResultsError
LOGGER.error("File '#{file}' doesn't exist or is not registered in the database: #{ex.message}")
failed_files << file
rescue ex
LOGGER.error "Unknown error: #{ex.message}"
error500 "Unknown error: #{ex.message}"
end
end
json = JSON.build do |j|
j.object do
j.field "successfull", successfull_files.size
j.field "failed", failed_files.size
j.field "successfullFiles", successfull_files
j.field "failedFiles", failed_files
end
end
end
# /api/admin/deleteiplimit
# curl -X POST -H "Content-Type: application/json" -H "X-Api-Key: asd" http://localhost:8080/api/admin/deleteiplimit -d '{"ips": ["127.0.0.1"]}' | jq
def delete_ip_limit(env)
data = env.params.json["ips"].as((Array(JSON::Any)))
successfull = [] of String
failed = [] of String
data.each do |item|
item = item.to_s
begin
# Delete entry from db
SQL.exec "DELETE FROM #{CONFIG.ipTableName} WHERE ip = ?", item
LOGGER.debug "Rate limit for '#{item}' was deleted"
successfull << item
rescue ex : DB::NoResultsError
LOGGER.error("Rate limit for '#{item}' doesn't exist or is not registered in the database: #{ex.message}")
failed << item
rescue ex
LOGGER.error "Unknown error: #{ex.message}"
error500 "Unknown error: #{ex.message}"
end
end
json = JSON.build do |j|
j.object do
j.field "successfull", successfull.size
j.field "failed", failed.size
j.field "successfullUnbans", successfull
j.field "failedUnbans", failed
end
end
end
# /api/admin/fileinfo
# curl -X POST -H "Content-Type: application/json" -H "X-Api-Key: asd" http://localhost:8080/api/admin/fileinfo -d '{"files": ["j63"]}' | jq
def retrieve_file_info(env)
data = env.params.json["files"].as((Array(JSON::Any)))
successfull = [] of NamedTuple(original_filename: String, filename: String, extension: String,
uploaded_at: String, checksum: String, ip: String, delete_key: String,
thumbnail: String | Nil)
failed = [] of String
data.each do |item|
item = item.to_s
begin
fileinfo = SQL.query_one("SELECT original_filename, filename, extension,
uploaded_at, checksum, ip, delete_key, thumbnail
FROM #{CONFIG.dbTableName}
WHERE filename = ?",
item,
as: {original_filename: String, filename: String, extension: String,
uploaded_at: String, checksum: String, ip: String, delete_key: String,
thumbnail: String | Nil})
successfull << fileinfo
rescue ex : DB::NoResultsError
LOGGER.error("File '#{item}' is not registered in the database: #{ex.message}")
failed << item
rescue ex
LOGGER.error "Unknown error: #{ex.message}"
error500 "Unknown error: #{ex.message}"
end
end
json = JSON.build do |j|
j.object do
j.field "files" do
j.array do
successfull.each do |fileinfo|
j.object do
j.field fileinfo[:filename] do
j.object do
j.field "original_filename", fileinfo[:original_filename]
j.field "filename", fileinfo[:filename]
j.field "extension", fileinfo[:extension]
j.field "uploaded_at", fileinfo[:uploaded_at]
j.field "checksum", fileinfo[:checksum]
j.field "ip", fileinfo[:ip]
j.field "delete_key", fileinfo[:delete_key]
j.field "thumbnail", fileinfo[:thumbnail]
end
end
end
end
end
end
j.field "successfull", successfull.size
j.field "failed", failed.size
# j.field "successfullFiles"
j.field "failedFiles", failed
end
end
end
# /api/admin/torexitnodes
# curl -X GET -H "X-Api-Key: asd" http://localhost:8080/api/admin/torexitnodes | jq
def retrieve_tor_exit_nodes(env, nodes)
json = JSON.build do |j|
j.object do
j.field "ips", nodes
end
end
end
# /api/admin/whitelist
# curl -X GET -H "X-Api-Key: asd" http://localhost:8080/api/admin/torexitnodes | jq
# def add_ip_to_whitelist(env, nodes)
# json = JSON.build do |j|
# j.object do
# j.field "ips", nodes
# end
# end
# end
# /api/admin/blacklist
# curl -X GET -H "X-Api-Key: asd" http://localhost:8080/api/admin/torexitnodes | jq
def add_ip_to_blacklist(env, nodes)
json = JSON.build do |j|
j.object do
j.field "ips", nodes
end
end
end
# MODULE END
end
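A minimal Crystal client sketch for the `/api/admin/delete` route removed above, equivalent to the `curl` command in its comment; it assumes the example API key `asd` and a server on `localhost:8080`:
```
require "http/client"
require "json"

headers = HTTP::Headers{
  "X-Api-Key"    => "asd",
  "Content-Type" => "application/json",
}
body = {"files" => ["j63"]}.to_json
response = HTTP::Client.post("http://localhost:8080/api/admin/delete", headers: headers, body: body)
# The handler answers with counts plus "successfullFiles"/"failedFiles" arrays
# (field spelling as written in the handler above).
puts response.body
```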

View file

@@ -1,386 +0,0 @@
require "../http-errors"
require "http/client"
require "benchmark"
module Handling
extend self
def upload(env)
env.response.content_type = "application/json"
ip_address = Utils.ip_address(env)
protocol = Utils.protocol(env)
host = Utils.host(env)
# You can modify this if you want to allow files smaller than 1MiB.
# This is generally a good way to check the filesize but there is a better way to do it
# which is inspecting the file directly (If I'm not wrong).
if CONFIG.size_limit > 0
if env.request.headers["Content-Length"].to_i > 1048576*CONFIG.size_limit
return error413("File is too big. The maximum size allowed is #{CONFIG.size_limit}MiB")
end
end
filename = ""
extension = ""
original_filename = ""
uploaded_at = ""
checksum = ""
if CONFIG.deleteKeyLength > 0
delete_key = Random.base58(CONFIG.deleteKeyLength)
end
# TODO: Return the file that matches a checksum inside the database
HTTP::FormData.parse(env.request) do |upload|
if upload.filename.nil? || upload.filename.to_s.empty?
LOGGER.debug "No file provided by the user"
return error403("No file provided")
end
# TODO: upload.body is emptied when it is copied or read
# Utils.check_duplicate(upload.dup)
extension = File.extname("#{upload.filename}")
if CONFIG.blockedExtensions.includes?(extension.split(".")[1])
return error401("Extension '#{extension}' is not allowed")
end
filename = Utils.generate_filename
file_path = ::File.join ["#{CONFIG.files}", filename + extension]
File.open(file_path, "w") do |output|
IO.copy(upload.body, output)
end
original_filename = upload.filename
uploaded_at = Time.utc
checksum = Utils.hash_file(file_path)
end
# X-Forwarded-For if behind a reverse proxy and the header is set in the reverse
# proxy configuration.
begin
spawn { Utils.generate_thumbnail(filename, extension) }
rescue ex
LOGGER.error "An error ocurred when trying to generate a thumbnail: #{ex.message}"
end
begin
# Insert SQL data just before returning the upload information
SQL.exec "INSERT INTO #{CONFIG.dbTableName} VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
original_filename, filename, extension, uploaded_at, checksum, ip_address, delete_key, nil
SQL.exec "INSERT OR IGNORE INTO #{CONFIG.ipTableName} (ip, date) VALUES (?, ?)", ip_address, Time.utc.to_unix
# SQL.exec "INSERT OR IGNORE INTO #{CONFIG.ipTableName} (ip) VALUES ('#{ip_address}')"
SQL.exec "UPDATE #{CONFIG.ipTableName} SET count = count + 1 WHERE ip = ('#{ip_address}')"
rescue ex
LOGGER.error "An error ocurred when trying to insert the data into the DB: #{ex.message}"
return error500("An error ocurred when trying to insert the data into the DB")
end
json = JSON.build do |j|
j.object do
j.field "link", "#{protocol}://#{host}/#{filename}"
j.field "linkExt", "#{protocol}://#{host}/#{filename}#{extension}"
j.field "id", filename
j.field "ext", extension
j.field "name", original_filename
j.field "checksum", checksum
if CONFIG.deleteKeyLength > 0
j.field "deleteKey", delete_key
j.field "deleteLink", "#{protocol}://#{host}/delete?key=#{delete_key}"
end
end
end
json
end
# The most unoptimized and unstable feature lol
def upload_url_bulk(env)
env.response.content_type = "application/json"
ip_address = Utils.ip_address(env)
protocol = Utils.protocol(env)
host = Utils.host(env)
files = env.params.json["files"].as((Array(JSON::Any)))
successfull_files = [] of NamedTuple(filename: String, extension: String, original_filename: String, checksum: String, delete_key: String | Nil)
failed_files = [] of String
# X-Forwarded-For if behind a reverse proxy and the header is set in the reverse
# proxy configuration.
if files.empty?
end
files.each do |url|
url = url.to_s
filename = Utils.generate_filename
original_filename = ""
extension = ""
checksum = ""
uploaded_at = Time.utc
extension = File.extname(URI.parse(url).path)
delete_key = nil
file_path = ::File.join ["#{CONFIG.files}", filename + extension]
File.open(file_path, "w") do |output|
begin
HTTP::Client.get(url) do |res|
IO.copy(res.body_io, output)
end
rescue ex
LOGGER.debug "Failed to download file '#{url}': #{ex.message}"
return error403("Failed to download file '#{url}'")
failed_files << url
end
end
# successfull_files << url
# end
if extension.empty?
extension = Utils.detect_extension(file_path)
File.rename(file_path, file_path + extension)
file_path = ::File.join ["#{CONFIG.files}", filename + extension]
end
# The second one is faster and it uses less memory
# original_filename = URI.parse("https://ayaya.beauty/PqC").path.split("/").last
original_filename = url.split("/").last
checksum = Utils.hash_file(file_path)
begin
spawn { Utils.generate_thumbnail(filename, extension) }
rescue ex
LOGGER.error "An error ocurred when trying to generate a thumbnail: #{ex.message}"
end
begin
# Insert SQL data just before returning the upload information
SQL.exec("INSERT INTO #{CONFIG.dbTableName} VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
original_filename, filename, extension, uploaded_at, checksum, ip_address, delete_key, nil)
successfull_files << {filename: filename,
original_filename: original_filename,
extension: extension,
delete_key: delete_key,
checksum: checksum}
rescue ex
LOGGER.error "An error ocurred when trying to insert the data into the DB: #{ex.message}"
return error500("An error ocurred when trying to insert the data into the DB")
end
end
json = JSON.build do |j|
j.array do
successfull_files.each do |fileinfo|
j.object do
j.field "link", "#{protocol}://#{host}/#{fileinfo[:filename]}"
j.field "linkExt", "#{protocol}://#{host}/#{fileinfo[:filename]}#{fileinfo[:extension]}"
j.field "id", fileinfo[:filename]
j.field "ext", fileinfo[:extension]
j.field "name", fileinfo[:original_filename]
j.field "checksum", fileinfo[:checksum]
if CONFIG.deleteKeyLength > 0
delete_key = Random.base58(CONFIG.deleteKeyLength)
j.field "deleteKey", fileinfo[:delete_key]
j.field "deleteLink", "#{protocol}://#{host}/delete?key=#{fileinfo[:delete_key]}"
end
end
end
end
end
json
end
# TODO: Add delete url, same for upload_url_bulk
def upload_url(env)
env.response.content_type = "application/json"
ip_address = Utils.ip_address(env)
protocol = Utils.protocol(env)
host = Utils.host(env)
url = env.params.query["url"]
successfull_files = [] of NamedTuple(filename: String, extension: String, original_filename: String, checksum: String, delete_key: String | Nil)
failed_files = [] of String
# X-Forwarded-For if behind a reverse proxy and the header is set in the reverse
# proxy configuration.
if url.empty?
end
# files.each do |url|
url = url.to_s
filename = Utils.generate_filename
original_filename = ""
extension = ""
checksum = ""
uploaded_at = Time.utc
extension = File.extname(URI.parse(url).path)
delete_key = nil
file_path = ::File.join ["#{CONFIG.files}", filename + extension]
File.open(file_path, "w") do |output|
begin
HTTP::Client.get(url) do |res|
IO.copy(res.body_io, output)
end
rescue ex
LOGGER.debug "Failed to download file '#{url}': #{ex.message}"
return error403("Failed to download file '#{url}'")
failed_files << url
end
end
# successfull_files << url
# end
if extension.empty?
extension = Utils.detect_extension(file_path)
File.rename(file_path, file_path + extension)
file_path = ::File.join ["#{CONFIG.files}", filename + extension]
end
# The second one is faster and it uses less memory
# original_filename = URI.parse("https://ayaya.beauty/PqC").path.split("/").last
original_filename = url.split("/").last
checksum = Utils.hash_file(file_path)
begin
spawn { Utils.generate_thumbnail(filename, extension) }
rescue ex
LOGGER.error "An error ocurred when trying to generate a thumbnail: #{ex.message}"
end
begin
# Insert SQL data just before returning the upload information
SQL.exec("INSERT INTO #{CONFIG.dbTableName} VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
original_filename, filename, extension, uploaded_at, checksum, ip_address, delete_key, nil)
successfull_files << {filename: filename,
original_filename: original_filename,
extension: extension,
delete_key: delete_key,
checksum: checksum}
rescue ex
LOGGER.error "An error ocurred when trying to insert the data into the DB: #{ex.message}"
return error500("An error ocurred when trying to insert the data into the DB")
end
# end
json = JSON.build do |j|
j.array do
successfull_files.each do |fileinfo|
j.object do
j.field "link", "#{protocol}://#{host}/#{fileinfo[:filename]}"
j.field "linkExt", "#{protocol}://#{host}/#{fileinfo[:filename]}#{fileinfo[:extension]}"
j.field "id", fileinfo[:filename]
j.field "ext", fileinfo[:extension]
j.field "name", fileinfo[:original_filename]
j.field "checksum", fileinfo[:checksum]
if CONFIG.deleteKeyLength > 0
delete_key = Random.base58(CONFIG.deleteKeyLength)
j.field "deleteKey", fileinfo[:delete_key]
j.field "deleteLink", "#{protocol}://#{host}/delete?key=#{fileinfo[:delete_key]}"
end
end
end
end
end
json
end
def retrieve_file(env)
begin
protocol = Utils.protocol(env)
host = Utils.host(env)
fileinfo = SQL.query_all("SELECT filename, original_filename, uploaded_at, extension, checksum, thumbnail
FROM #{CONFIG.dbTableName}
WHERE filename = ?",
env.params.url["filename"].split(".").first,
as: {filename: String, ofilename: String, up_at: String, ext: String, checksum: String, thumbnail: String | Nil})[0]
# Benchmark.ips do |x|
# x.report("header multiple") { headers(env, {"Content-Disposition" => "inline; filename*=UTF-8''#{fileinfo[:ofilename]}",
# "Last-Modified" => "#{fileinfo[:up_at]}",
# "ETag" => "#{fileinfo[:checksum]}"}) }
# x.report("shorter sleep") do
# env.response.headers["Content-Disposition"] = "inline; filename*=UTF-8''#{fileinfo[:ofilename]}"
# env.response.headers["Last-Modified"] = "#{fileinfo[:up_at]}"
# env.response.headers["ETag"] = "#{fileinfo[:checksum]}"
# end
# end
# `env.response.headers` is faster than `headers(env, Hash(String, String))`
# https://github.com/kemalcr/kemal/blob/3243b8e0e03568ad3bd9f0ad6f445c871605b821/src/kemal/helpers/helpers.cr#L102C1-L104C4
env.response.headers["Content-Disposition"] = "inline; filename*=UTF-8''#{fileinfo[:ofilename]}"
# env.response.headers["Last-Modified"] = "#{fileinfo[:up_at]}"
env.response.headers["ETag"] = "#{fileinfo[:checksum]}"
CONFIG.opengraphUseragents.each do |useragent|
if env.request.headers.try &.["User-Agent"].includes?(useragent)
env.response.content_type = "text/html"
return %(
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta property="og:title" content="#{fileinfo[:ofilename]}">
<meta property="og:url" content="#{protocol}://#{host}/#{fileinfo[:filename]}">
#{if fileinfo[:thumbnail]
%(<meta property="og:image" content="#{protocol}://#{host}/thumbnail/#{fileinfo[:filename]}.jpg">)
end}
</head>
</html>
)
end
end
send_file env, "#{CONFIG.files}/#{fileinfo[:filename]}#{fileinfo[:ext]}"
rescue ex
LOGGER.debug "File '#{env.params.url["filename"]}' does not exist: #{ex.message}"
return error403("File '#{env.params.url["filename"]}' does not exist")
end
end
def retrieve_thumbnail(env)
begin
send_file env, "#{CONFIG.thumbnails}/#{env.params.url["thumbnail"]}"
rescue ex
LOGGER.debug "Thumbnail '#{env.params.url["thumbnail"]}' does not exist: #{ex.message}"
return error403("Thumbnail '#{env.params.url["thumbnail"]}' does not exist")
end
end
def stats(env)
env.response.content_type = "application/json"
begin
json_data = JSON.build do |json|
json.object do
json.field "stats" do
json.object do
json.field "filesHosted", SQL.query_one "SELECT COUNT (filename) FROM #{CONFIG.dbTableName}", as: Int32
json.field "maxUploadSize", CONFIG.size_limit
json.field "thumbnailGeneration", CONFIG.generateThumbnails
json.field "filenameLength", CONFIG.fileameLength
json.field "alternativeDomains", CONFIG.alternativeDomains
end
end
end
end
rescue ex
LOGGER.error "Unknown error: #{ex.message}"
return error500("Unknown error")
end
json_data
end
def delete_file(env)
if SQL.query_one "SELECT EXISTS(SELECT 1 FROM #{CONFIG.dbTableName} WHERE delete_key = ?)", env.params.query["key"], as: Bool
begin
fileinfo = SQL.query_all("SELECT filename, extension, thumbnail
FROM #{CONFIG.dbTableName}
WHERE delete_key = ?",
env.params.query["key"],
as: {filename: String, extension: String, thumbnail: String | Nil})[0]
# Delete file
File.delete("#{CONFIG.files}/#{fileinfo[:filename]}#{fileinfo[:extension]}")
if fileinfo[:thumbnail]
# Delete thumbnail
File.delete("#{CONFIG.thumbnails}/#{fileinfo[:thumbnail]}")
end
# Delete entry from db
SQL.exec "DELETE FROM #{CONFIG.dbTableName} WHERE delete_key = ?", env.params.query["key"]
LOGGER.debug "File '#{fileinfo[:filename]}' was deleted using key '#{env.params.query["key"]}'}"
return msg("File '#{fileinfo[:filename]}' deleted successfully")
rescue ex
LOGGER.error("Unknown error: #{ex.message}")
return error500("Unknown error")
end
else
LOGGER.debug "Key '#{env.params.query["key"]}' does not exist"
return error401("Delete key '#{env.params.query["key"]}' does not exist. No files were deleted")
end
end
def sharex_config(env)
host = Utils.host(env)
protocol = Utils.protocol(env)
env.response.content_type = "application/json"
# So it's able to download the file instead of displaying it
env.response.headers["Content-Disposition"] = "attachment; filename=\"#{host}.sxcu\""
return %({
"Version": "14.0.1",
"DestinationType": "ImageUploader, FileUploader",
"RequestMethod": "POST",
"RequestURL": "#{protocol}://#{host}/upload",
"Body": "MultipartFormData",
"FileFormName": "file",
"URL": "{json:link}",
"DeletionURL": "{json:deleteLink}",
"ErrorMessage": "{json:error}"
})
end
end
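A minimal Crystal client sketch for the bulk URL upload handler above, which is registered as `POST /api/uploadurl` in the routing file further down; the source URL is hypothetical:
```
require "http/client"
require "json"

headers = HTTP::Headers{"Content-Type" => "application/json"}
body = {"files" => ["https://example.com/picture.jpg"]}.to_json
response = HTTP::Client.post("http://127.0.0.1:8080/api/uploadurl", headers: headers, body: body)
# The response is a JSON array with "link", "linkExt", "id", "ext", "name" and
# "checksum" for each downloaded file.
puts response.body
```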

View file

@@ -1,40 +0,0 @@
macro error401(message)
env.response.content_type = "application/json"
env.response.status_code = 401
error_message = {"error" => {{message}}}.to_json
error_message
end
macro error403(message)
env.response.content_type = "application/json"
env.response.status_code = 403
error_message = {"error" => {{message}}}.to_json
error_message
end
macro error404(message)
env.response.content_type = "application/json"
env.response.status_code = 404
error_message = {"error" => {{message}}}.to_json
error_message
end
macro error413(message)
env.response.content_type = "application/json"
env.response.status_code = 413
error_message = {"error" => {{message}}}.to_json
error_message
end
macro error500(message)
env.response.content_type = "application/json"
env.response.status_code = 500
error_message = {"error" => {{message}}}.to_json
error_message
end
macro msg(message)
env.response.content_type = "application/json"
msg = {"message" => {{message}}}.to_json
msg
end
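A minimal usage sketch for the macros above. They expand in place, so `env` must be in scope; the handler below is hypothetical and mirrors how `Handling` uses them:
```
# Hypothetical handler illustrating the error/msg macros.
def example_handler(env)
  key = env.params.query["key"]?
  return error401("Missing delete key") if key.nil?
  msg("Key '#{key}' received")
end
```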

View file

@@ -1,45 +0,0 @@
# Pretty cool way to write background jobs! :)
module Jobs
def self.check_old_files
if CONFIG.deleteFilesCheck <= 0
LOGGER.info "File deletion is disabled"
return
end
spawn do
loop do
Utils.check_old_files
sleep CONFIG.deleteFilesCheck
end
end
end
def self.retrieve_tor_exit_nodes
if !CONFIG.blockTorAddresses
return
end
spawn do
loop do
Utils.retrieve_tor_exit_nodes
# Updates the @@exit_nodes array instantly
Routing.reload_exit_nodes
sleep CONFIG.torExitNodesCheck
end
end
end
def self.kemal
spawn do
if !CONFIG.unix_socket.nil?
Kemal.run &.server.not_nil!.bind_unix "#{CONFIG.unix_socket}"
else
Kemal.run
end
end
end
def self.run
check_old_files
retrieve_tor_exit_nodes
kemal
end
end
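A hypothetical additional job following the same spawn-plus-loop pattern as the module above; the interval and log message are illustrative:
```
def self.log_uptime
  started = Time.utc
  spawn do
    loop do
      LOGGER.info "Uptime: #{(Time.utc - started).total_minutes.round} minutes"
      sleep 600
    end
  end
end
```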

View file

@@ -1,36 +0,0 @@
# https://github.com/crystal-china/base58.cr/blob/main/src/base58.cr
require "random"
module Random
# Base58 string may contain alphanumeric characters except 0, O, I and l.
# ("0".."9").to_a + ("A".."Z").to_a + ("a".."z").to_a - ["0", "O", "I", "l"]
BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
def self.base58(length : Int32 = 16, random = Random::DEFAULT) : String
# Stolen from https://forum.crystal-lang.org/t/is-this-a-good-way-to-generate-a-random-string/6986/11,
# thank a lot for these awesome discussions in this thread.
if length <= 1024
buffer = uninitialized UInt8[1024]
bytes = buffer.to_slice[0...length]
else
bytes = Bytes.new(length)
end
# then all valid indices are in [0,63], so just get a bunch of bytes
# and divide until they're guaranteed to be small enough
# (this seems to be about as fast as a right shift; the compiler probably optimizes it)
random.random_bytes(bytes)
bytes.map! { |v| v % BASE58_ALPHABET.bytesize }
# and then use the buffer-based string constructor to set the characters
String.new(capacity: length) do |buffer|
bytes.each_with_index do |chars_index, buffer_index|
buffer[buffer_index] = BASE58_ALPHABET.byte_at(chars_index)
end
# return size and bytesize (might differ if chars included non-ASCII)
{length, length}
end
end
end
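A minimal usage sketch of the helper above; the output value is illustrative:
```
Random.base58(4)                  # => "x7Kp" (default RNG, used for filenames and delete keys)
Random.base58(16, Random::Secure) # same alphabet, backed by a cryptographically secure source
```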

View file

@@ -1,70 +0,0 @@
# https://github.com/iv-org/invidious/blob/master/src/invidious/helpers/logger.cr
enum LogLevel
All = 0
Trace = 1
Debug = 2
Info = 3
Warn = 4
Error = 5
Fatal = 6
Off = 7
end
class LogHandler < Kemal::BaseLogHandler
def initialize(@io : IO = STDOUT, @level = LogLevel::Debug)
end
def call(context : HTTP::Server::Context)
elapsed_time = Time.measure { call_next(context) }
elapsed_text = elapsed_text(elapsed_time)
# Default: full path with parameters
requested_url = context.request.resource
# Try not to log search queries passed as GET parameters during normal use
# (They will still be logged if log level is 'Debug' or 'Trace')
if @level > LogLevel::Debug && (
requested_url.downcase.includes?("search") || requested_url.downcase.includes?("q=")
)
# Log only the path
requested_url = context.request.path
end
info("#{context.response.status_code} #{context.request.method} #{requested_url} #{elapsed_text}")
context
end
def puts(message : String)
@io << message << '\n'
@io.flush
end
def write(message : String)
@io << message
@io.flush
end
def set_log_level(level : String)
@level = LogLevel.parse(level)
end
def set_log_level(level : LogLevel)
@level = level
end
{% for level in %w(trace debug info warn error fatal) %}
def {{level.id}}(message : String)
if LogLevel::{{level.id.capitalize}} >= @level
puts("#{Time.utc} [{{level.id}}] #{message}")
end
end
{% end %}
private def elapsed_text(elapsed)
millis = elapsed.total_milliseconds
return "#{millis.round(2)}ms" if millis >= 1
"#{(millis * 1000).round(2)}µs"
end
end
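A minimal usage sketch of the handler above; the macro block generates one method per level, and messages below the configured level are dropped:
```
logger = LogHandler.new(STDOUT, LogLevel::Info)
logger.debug "not written, Debug is below Info"
logger.info "written with a UTC timestamp and [info] prefix"
logger.set_log_level("error") # same parsing used for the log_level config value
```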

View file

@@ -1,110 +1,134 @@
require "./http-errors"
require "json"
HOST_URL = "inv.nadeko.cl"
RESPONSE_HEADERS_BLACKLIST = {"access-control-allow-origin", "alt-svc", "server"}
REQUEST_HEADERS_WHITELIST = {"accept", "accept-encoding", "cache-control", "content-length", "if-none-match", "range"}
module Routing
extend self
@@exit_nodes = Array(String).new
def reload_exit_nodes
LOGGER.debug "Updating Tor exit nodes array"
@@exit_nodes = Utils.load_tor_exit_nodes
LOGGER.debug "IPs inside the exit nodes array: #{@@exit_nodes.size}"
end
before_post "/api/admin/*" do |env|
if env.request.headers.try &.["X-Api-Key"]? != CONFIG.adminApiKey || nil
halt env, status_code: 401, response: error401("Wrong API Key")
def self.proxy_file(response, env)
if response.headers.includes_word?("Content-Encoding", "gzip")
Compress::Gzip::Writer.open(env.response) do |deflate|
IO.copy response.body_io, deflate
end
elsif response.headers.includes_word?("Content-Encoding", "deflate")
Compress::Deflate::Writer.open(env.response) do |deflate|
IO.copy response.body_io, deflate
end
else
IO.copy response.body_io, env.response
end
end
before_post do |env|
if env.request.headers.try &.["X-Api-Key"]? == CONFIG.adminApiKey
# Skips Tor and Rate limits if the API key matches
next
end
if CONFIG.blockTorAddresses && @@exit_nodes.includes?(Utils.ip_address(env))
halt env, status_code: 401, response: error401(CONFIG.torMessage)
end
# There is a better way to do this
if env.request.resource == "/upload"
begin
ip_info = SQL.query_all("SELECT ip, count, date FROM #{CONFIG.ipTableName} WHERE ip = ?", Utils.ip_address(env), as: {ip: String, count: Int32, date: Int32})[0]
time_since_first_upload = Time.utc.to_unix - ip_info[:date]
time_until_unban = ip_info[:date] - Time.utc.to_unix + CONFIG.rateLimitPeriod
if time_since_first_upload > CONFIG.rateLimitPeriod
SQL.exec "DELETE FROM #{CONFIG.ipTableName} WHERE ip = ?", ip_info[:ip]
end
if ip_info[:count] >= CONFIG.filesPerIP && time_since_first_upload < CONFIG.rateLimitPeriod
halt env, status_code: 401, response: error401("Rate limited! Try again in #{time_until_unban} seconds")
end
rescue ex
LOGGER.error "Error when trying to enforce rate limits: #{ex.message}"
next
private def self.proxy_image(env, response)
env.response.status_code = response.status_code
response.headers.each do |key, value|
if !RESPONSE_HEADERS_BLACKLIST.includes?(key.downcase)
env.response.headers[key] = value
end
end
env.response.headers["Access-Control-Allow-Origin"] = "*"
if response.status_code >= 300
return env.response.headers.delete("Transfer-Encoding")
end
return proxy_file(response, env)
end
def register_all
get "/" do |env|
host = Utils.host(env)
files_hosted = SQL.query_one "SELECT COUNT (filename) FROM #{CONFIG.dbTableName}", as: Int32
render "src/views/index.ecr"
get "/vi/:id/:name" do |env|
thumbnails(env)
end
post "/upload" do |env|
Handling.upload(env)
get "/debug" do |env|
debug(env)
end
get "/upload" do |env|
Handling.upload_url(env)
get "/debug2" do |env|
debug(env)
end
post "/api/uploadurl" do |env|
Handling.upload_url_bulk(env)
end
get "/:filename" do |env|
Handling.retrieve_file(env)
end
get "/thumbnail/:thumbnail" do |env|
Handling.retrieve_thumbnail(env)
end
get "/delete" do |env|
Handling.delete_file(env)
end
get "/api/stats" do |env|
Handling.stats(env)
end
get "/sharex.sxcu" do |env|
Handling.sharex_config(env)
end
if CONFIG.adminEnabled
self.register_admin
get "/test" do |env|
"test"
end
end
def register_admin
# post "/api/admin/upload" do |env|
# Handling::Admin.delete_ip_limit(env)
def self.debug(env)
env.response.content_type = "application/json"
meow2 = [] of String
pool_info = YTIMG_POOLS["i"].get_pool.stats
pp pool_info
xd = JSON.build do |z|
z.object do
z.field "pool_info" do
z.object do
z.field "pool_idle_conn", pool_info.idle_connections
z.field "pool_max_conn", pool_info.in_flight_connections
z.field "pool_in_fligth_conn", pool_info.max_connections
z.field "pool_open_conn", pool_info.open_connections
end
end
z.field "pool_capacity", YTIMG_POOLS["i"].inspect
end
end
return xd
end
XD_CLIENT = HTTP::Client.new("i.ytimg.com")
def self.thumbnails(env)
id = env.params.url["id"]
name = env.params.url["name"]
headers = HTTP::Headers.new
xd = {
{host: HOST_URL, height: 720, width: 1280, name: "maxres", url: "maxres"},
{host: HOST_URL, height: 720, width: 1280, name: "maxresdefault", url: "maxresdefault"},
{host: HOST_URL, height: 480, width: 640, name: "sddefault", url: "sddefault"},
{host: HOST_URL, height: 360, width: 480, name: "high", url: "hqdefault"},
{host: HOST_URL, height: 180, width: 320, name: "medium", url: "mqdefault"},
{host: HOST_URL, height: 90, width: 120, name: "default", url: "default"},
{host: HOST_URL, height: 90, width: 120, name: "start", url: "1"},
{host: HOST_URL, height: 90, width: 120, name: "middle", url: "2"},
{host: HOST_URL, height: 90, width: 120, name: "end", url: "3"},
}
# if name == "maxres.jpg"
# xd.each do |thumb|
# thumbnail_resource_path = "/vi/#{id}/#{thumb[:url]}.jpg"
# if get_ytimg_pool("i").client &.head(thumbnail_resource_path, headers).status_code == 200
# name = thumb[:url] + ".jpg"
# break
# end
# end
post "/api/admin/delete" do |env|
Handling::Admin.delete_file(env)
# end
url = "/vi/#{id}/#{name}"
REQUEST_HEADERS_WHITELIST.each do |header|
if env.request.headers[header]?
headers[header] = env.request.headers[header]
end
end
end
post "/api/admin/deleteiplimit" do |env|
Handling::Admin.delete_ip_limit(env)
end
begin
# meow = HTTP::Client.new(url)
post "/api/admin/fileinfo" do |env|
Handling::Admin.retrieve_file_info(env)
end
get "/api/admin/torexitnodes" do |env|
Handling::Admin.retrieve_tor_exit_nodes(env, @@exit_nodes)
# XD_CLIENT.get(url, headers: headers) do |resp|
# return self.proxy_image(env, resp)
# end
get_ytimg_pool("i").client &.get(url, headers) do |resp|
return self.proxy_image(env, resp)
end
rescue ex
puts "#{ex.message} + #{ex.inspect}"
end
end
end

View file

@@ -1,270 +0,0 @@
module Utils
extend self
def create_db
if !SQL.query_one "SELECT EXISTS (SELECT 1 FROM sqlite_schema WHERE type='table' AND name='#{CONFIG.dbTableName}')
AND EXISTS (SELECT 1 FROM sqlite_schema WHERE type='table' AND name='#{CONFIG.ipTableName}');", as: Bool
LOGGER.info "Creating sqlite3 database at '#{CONFIG.db}'"
begin
SQL.exec "CREATE TABLE IF NOT EXISTS #{CONFIG.dbTableName}
(original_filename text, filename text, extension text, uploaded_at text, checksum text, ip text, delete_key text, thumbnail text)"
SQL.exec "CREATE TABLE IF NOT EXISTS #{CONFIG.ipTableName}
(ip text UNIQUE, count integer DEFAULT 0, date integer)"
rescue ex
LOGGER.fatal "#{ex.message}"
exit(1)
end
end
end
def create_files_dir
if !Dir.exists?("#{CONFIG.files}")
LOGGER.info "Creating files folder under '#{CONFIG.files}'"
begin
Dir.mkdir("#{CONFIG.files}")
rescue ex
LOGGER.fatal "#{ex.message}"
exit(1)
end
end
end
def create_thumbnails_dir
if !CONFIG.thumbnails
if !Dir.exists?("#{CONFIG.thumbnails}")
LOGGER.info "Creating thumbnails folder under '#{CONFIG.thumbnails}'"
begin
Dir.mkdir("#{CONFIG.thumbnails}")
rescue ex
LOGGER.fatal "#{ex.message}"
exit(1)
end
end
end
end
def check_old_files
LOGGER.info "Deleting old files"
dir = Dir.new("#{CONFIG.files}")
# Delete entries from DB
SQL.exec "DELETE FROM #{CONFIG.dbTableName} WHERE uploaded_at < date('now', '-#{CONFIG.deleteFilesAfter} days');"
# Delete files
dir.each_child do |file|
if (Time.utc - File.info("#{CONFIG.files}/#{file}").modification_time).days >= CONFIG.deleteFilesAfter
LOGGER.debug "Deleting file '#{file}'"
begin
File.delete("#{CONFIG.files}/#{file}")
rescue ex
LOGGER.error "#{ex.message}"
end
end
end
# Close directory to prevent `Too many open files (File::Error)` error.
# This is because the directory class is still saved on memory for some reason.
dir.close
end
def check_dependencies
dependencies = ["ffmpeg"]
dependencies.each do |dep|
next if !CONFIG.generateThumbnails
if !Process.find_executable(dep)
LOGGER.fatal("'#{dep}' was not found")
exit(1)
end
end
end
# TODO:
# def check_duplicate(upload)
# file_checksum = SQL.query_all("SELECT checksum FROM #{CONFIG.dbTableName} WHERE original_filename = ?", upload.filename, as:String).try &.[0]?
# if file_checksum.nil?
# return
# else
# uploaded_file_checksum = hash_io(upload.body)
# pp file_checksum
# pp uploaded_file_checksum
# if file_checksum == uploaded_file_checksum
# puts "Dupl"
# end
# end
# end
def hash_file(file_path : String) : String
Digest::SHA1.hexdigest &.file(file_path)
end
def hash_io(file_path : IO) : String
Digest::SHA1.hexdigest &.update(file_path)
end
# TODO: Check if there are no other possibilities to get a random filename and exit
def generate_filename
filename = Random.base58(CONFIG.fileameLength)
loop do
if SQL.query_one("SELECT COUNT(filename) FROM #{CONFIG.dbTableName} WHERE filename = ?", filename, as: Int32) == 0
return filename
else
LOGGER.debug "Filename collision! Generating a new filename"
filename = Random.base58(CONFIG.fileameLength)
end
end
end
def generate_thumbnail(filename, extension)
# Disable generation if false
return if !CONFIG.generateThumbnails
LOGGER.debug "Generating thumbnail for #{filename + extension} in background"
process = Process.run("ffmpeg",
[
"-hide_banner",
"-i",
"#{CONFIG.files}/#{filename + extension}",
"-movflags", "faststart",
"-f", "mjpeg",
"-q:v", "2",
"-vf", "scale='min(350,iw)':'min(350,ih)':force_original_aspect_ratio=decrease, thumbnail=100",
"-frames:v", "1",
"-update", "1",
"#{CONFIG.thumbnails}/#{filename}.jpg",
])
if process.normal_exit?
LOGGER.debug "Thumbnail for #{filename + extension} generated successfully"
SQL.exec "UPDATE #{CONFIG.dbTableName} SET thumbnail = ? WHERE filename = ?", filename + ".jpg", filename
else
end
end
# Delete the socket if it was not previously cleaned up by the server
# (Due to unclean exits, crashes, etc.)
def delete_socket
if File.exists?("#{CONFIG.unix_socket}")
LOGGER.info "Deleting old unix socket"
begin
File.delete("#{CONFIG.unix_socket}")
rescue ex
LOGGER.fatal "#{ex.message}"
exit(1)
end
end
end
def delete_file(env)
fileinfo = SQL.query_all("SELECT filename, extension, thumbnail
FROM #{CONFIG.dbTableName}
WHERE delete_key = ?",
env.params.query["key"],
as: {filename: String, extension: String, thumbnail: String | Nil})[0]
# Delete file
File.delete("#{CONFIG.files}/#{fileinfo[:filename]}#{fileinfo[:extension]}")
if fileinfo[:thumbnail]
# Delete thumbnail
File.delete("#{CONFIG.thumbnails}/#{fileinfo[:thumbnail]}")
end
# Delete entry from db
SQL.exec "DELETE FROM #{CONFIG.dbTableName} WHERE delete_key = ?", env.params.query["key"]
LOGGER.debug "File '#{fileinfo[:filename]}' was deleted using key '#{env.params.query["key"]}'}"
msg("File '#{fileinfo[:filename]}' deleted successfully")
end
MAGIC_BYTES = {
# Images
".png" => "89504e470d0a1a0a",
".heic" => "6674797068656963",
".jpg" => "ffd8ff",
".gif" => "474946383",
# Videos
".mp4" => "66747970",
".webm" => "1a45dfa3",
".mov" => "6d6f6f76",
".wmv" => "󠀀3026b2758e66cf11",
".flv" => "󠀀464c5601",
".mpeg" => "000001bx",
# Audio
".mp3" => "󠀀494433",
".aac" => "󠀀fff1",
".wav" => "󠀀57415645666d7420",
".flac" => "󠀀664c614300000022",
".ogg" => "󠀀4f67675300020000000000000000",
".wma" => "󠀀3026b2758e66cf11a6d900aa0062ce6c",
".aiff" => "󠀀464f524d00",
# Whatever
".7z" => "377abcaf271c",
".gz" => "1f8b",
".iso" => "󠀀4344303031",
# Documents
"pdf" => "󠀀25504446",
"html" => "<!DOCTYPE html>",
}
def detect_extension(file) : String
file = File.open(file)
slice = Bytes.new(16)
hex = IO::Hexdump.new(file)
# Reads the first 16 bytes of the file in Heap
hex.read(slice)
MAGIC_BYTES.each do |ext, mb|
if slice.hexstring.includes?(mb)
return ext
end
end
""
end
def retrieve_tor_exit_nodes
LOGGER.debug "Retrieving Tor exit nodes list"
HTTP::Client.get(CONFIG.torExitNodesUrl) do |res|
begin
if res.success? && res.status_code == 200
begin
File.open(CONFIG.torExitNodesFile, "w") { |output| IO.copy(res.body_io, output) }
rescue ex
LOGGER.error "Failed to write to file: #{ex.message}"
end
else
LOGGER.error "Failed to retrieve exit nodes list. Status Code: #{res.status_code}"
end
rescue ex : Socket::ConnectError
LOGGER.error "Failed to connect to #{CONFIG.torExitNodesUrl}: #{ex.message}"
rescue ex
LOGGER.error "Unknown error: #{ex.message}"
end
end
end
def load_tor_exit_nodes
exit_nodes = File.read_lines(CONFIG.torExitNodesFile)
ips = [] of String
exit_nodes.each do |line|
if line.includes?("ExitAddress")
ips << line.split(" ")[1]
end
end
return ips
end
def ip_address(env) : String
begin
return env.request.headers.try &.["X-Forwarded-For"]
rescue
return env.request.remote_address.to_s.split(":").first
end
end
def protocol(env) : String
begin
return env.request.headers.try &.["X-Forwarded-Proto"]
rescue
return "http"
end
end
def host(env) : String
begin
return env.request.headers.try &.["X-Forwarded-Host"]
rescue
return env.request.headers["Host"]
end
end
end
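A short usage sketch of two helpers above; the paths are hypothetical:
```
Utils.hash_file("./files/abc.png")        # => SHA1 hex digest stored as the file checksum
Utils.detect_extension("./files/mystery") # => ".png", ".mp4", etc. if the magic bytes match, "" otherwise
```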

View file

@@ -1,44 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title> <%= host %> </title>
<link rel="stylesheet" href="styles.css">
<link rel="icon" href="./favicon.gif" type="image/gif" />
<script src="script.js"></script>
</head>
<body>
<div class="container">
<h1 style="font-size: 68px; text-align: center; margin: 20px;"><%= host %></h1>
<p style="text-align: center; font-size: 22px;"><%= CONFIG.siteInfo %></p>
<div id="drop-area">
<p style='padding: 0;margin: 0; color: #123718bf;'>Arrastra, Pega o Selecciona archivos.</p>
<input type="file" id="fileElem" accept="*/*" style="display: none;">
<!-- <label for="fileElem" class="button">Select File</label> -->
</div>
<div id="upload-status"></div>
</div>
<div>
<div style="text-align:center;">
<p>
<a href='./chatterino.png'>Chatterino Config</a> |
<a href='./sharex.sxcu'>ShareX Config</a> |
<a href='https://codeberg.org/Fijxu/file-uploader-crystal'>
file-uploader-crystal (BETA <%= CURRENT_TAG %> - <%= CURRENT_VERSION %> @ <%= CURRENT_BRANCH %>)
</a>
</p>
<p>Archivos alojados: <%= files_hosted %></p>
<% if CONFIG.blockTorAddresses %>
<p style="color: red"><%= CONFIG.torMessage %></p>
<% end %>
<% if !CONFIG.alternativeDomains.empty? %>
<p>
<% CONFIG.alternativeDomains.each do | domain | %>
<a href="https://<%= domain %>"><%= domain %></a>
<% end %>
</p>
<% end %>
</div>
</body>
</html>