From e5acf314d6109df29952770c26dcd12ba281807a Mon Sep 17 00:00:00 2001
From: SanJacobs
Date: Sun, 9 Oct 2022 06:40:21 +0200
Subject: Finishing touches

---
 README.md       | 55 +++++++++++++++----------------------------------------
 html/1.part     | 10 +++++-----
 send-refresh.sh | 17 +----------------
 3 files changed, 21 insertions(+), 61 deletions(-)

diff --git a/README.md b/README.md
index 4b04988..919aaa0 100644
--- a/README.md
+++ b/README.md
@@ -3,49 +3,24 @@
 I refuse to use WeTransfer or FileMail or any of that stuff.
 The idea of using a third party service to do something so stupidly simple as to transfer files is insulting.
 And, because nobody I work with is going to set up an FTP server for me to drop files into,
-and managing an FTP server with a bunch of different accounts for each client sounds like a hassle,
-I'm doing this instead.
+and managing an FTP server with a bunch of different accounts for each client to download from sounds like a hassle,
+I did this instead.
 
-Writing my own version of these services, so I can host it myself.
+A quick weekend project.
 
-## NOTE:
+## Usage
 
-This is not finished software, don't use it.
+Slight warning: this is not finished, polished software.
+There is no settings menu.
+This is customized by changing the bash script.
+If you're ok with that, then have at it!
+Everything you need is in `send-refresh.sh`.
 
-## Planning
+Put this on your server.
+Change the html parts to suit your website, and change the bash file to suit your server.
+Once that's done, you can make it run every 5 or 10 minutes using `crontab`.
+It wouldn't be hard to add email notifications about anything that happens in the script using `mail`, either, if you want.
+Go to town! :)
 
-### The main problem
+For you, as the admin, all you need to do is use your favorite file transfer protocol to put a .zip file in the chosen directory, and let the server do its thing.
 
-Not generating the page for every file every time the script runs.
-Somehow, the script needs to know if a file has already had it's page set up.
-Ok, just md5 the file, and see if the directory exists.
-Do the same for when the file is beyond 14 days old, and delete said directory, and then the file.
-
-### The easy part
-
-I need a dir where I put a bunch of zip files.
-Probably the home directory of a new user named "send"
-Each filename will get md5sum'ed, and that will become the link.
-
-    send.sparkburst.net/f38eba6dbbe1965bc4869621d5a6fed3/test.zip
-
-To generate the webpage, all I need is a few pieces of HTML to concatenate
-
-1. Header and down to an opening `<h1>`
-2. Title of File
-3. `<\h1>` and on to half of the `<a>`
-4. filename.zip
-5. Download and down the rest of the page.
-6. And then a bunch of file info, from these commands:
-
-       stat something.zip
-       zipinfo -1 something.zip | tree --fromfile .
-
-`>>` these into index.html, and you've got a webpage.
-Save that at
-
-    send.sparkburst.net/f38eba6dbbe1965bc4869621d5a6fed3/index.html
-
-Et voila! Download page done.
-
-Maybe I'll grab the style.css that is already in use for the rest of https://sparkburst.net/

diff --git a/html/1.part b/html/1.part
index 78f0f95..7d5cbb4 100644
--- a/html/1.part
+++ b/html/1.part
@@ -7,7 +7,7 @@ Sparkburst Send
-
+
@@ -20,17 +20,17 @@
-
-		Sparkburst Send
-
+
+		Sparkburst Send
+
 		Your file:
diff --git a/send-refresh.sh b/send-refresh.sh
index 422f787..3df4a51 100755
--- a/send-refresh.sh
+++ b/send-refresh.sh
@@ -1,17 +1,5 @@
 #!/bin/bash
 
-# Recursively delete directories by age (14 days), and only exactly at the depth of the base directory
-#find /path/to/base/dir/ -mindepth 1 -maxdepth 1 -type d -ctime +14 -exec rm -rf {} \;
-
-
-# Discover files in the user's home directory, older than 14 days, and delete them
-#find /path/to/home/dir/ -mindepth 1 -maxdepth 1 -type f -ctime +14 -exec rm -rf {} \;
-# Discover files in the user's home directory, younger than 14 days
-#find /path/to/home/dir/ -mindepth 1 -maxdepth 1 -type f -ctime -14
-
-# command for getting the filename out of a full path
-#basename /path/to/file.zip
-
 # Creating an arrays of home directory files:
 
 webdir="testdir/website/"
@@ -26,9 +14,6 @@ old_files=()
 while IFS= read -r -d $'\0'; do
     old_files+=("$REPLY")
 done < <(find $webdir* -mindepth 1 -maxdepth 1 -type f -ctime +14 -name "*.zip" -print0)
-# +14 this should say
-
-# Looping over said array:
 
 echo " --- Young files ---"
 for file in "${young_files[@]}"; do
@@ -60,7 +45,7 @@ done
 
-echo " --- Old files ---"
+echo -e "\n --- Old files ---"
 for file in "${old_files[@]}"; do
     filename=$(basename "$file")
     checksum=$(md5sum "$file" | awk '{print $1;}')
-- 
cgit v1.2.1
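The scheme described in the removed planning notes — hash the zip, use the hash as the link directory, and only build `index.html` when that directory doesn't exist yet — can be sketched in a few lines of bash. This is a minimal sketch, not the repo's actual script: the paths, the stand-in zip file, and the inline HTML (which stands in for the concatenated `html/*.part` pieces) are all illustrative.

```shell
#!/bin/bash
set -eu

# Illustrative web root (the real script uses its own webdir).
webdir="testdir/website"
mkdir -p "$webdir"

# Stand-in zip so the sketch is self-contained.
file="$webdir/test.zip"
: > "$file"

filename=$(basename "$file")
checksum=$(md5sum "$file" | awk '{print $1;}')
pagedir="$webdir/$checksum"

# The "main problem" from the planning notes: knowing whether a file
# already has its page. The checksum directory doubles as the marker,
# so the page is generated at most once per file.
if [ ! -d "$pagedir" ]; then
    mkdir -p "$pagedir"
    {
        printf '<h1>%s</h1>\n' "$filename"              # stands in for the html parts
        printf '<a href="%s">Download</a>\n' "$filename"
    } > "$pagedir/index.html"
fi
```

Deleting expired pages is the same test in reverse: if the file is older than 14 days, remove the checksum directory and then the file, as the script's `find ... -ctime +14` loop does.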
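The README's `crontab` and `mail` suggestions might look like the following. The schedule, script path, log location, and notification address are assumptions for illustration, not part of the repo.

```shell
# Hypothetical crontab entry (added via `crontab -e` as the owning user):
# run the refresh script every 10 minutes and keep a log.
*/10 * * * * /home/send/send-refresh.sh >> /home/send/send-refresh.log 2>&1

# Inside send-refresh.sh, a notification could be a one-liner with mail(1):
# echo "New file published: $filename" | mail -s "Sparkburst Send" admin@example.com
```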