Back when no one read my blog, I used to archive websites by hand. This was slow and inefficient. Luckily, I found a better approach and improved on it by defining this Bash function:

# archive an entire site with archive.org
# @usage archive
archive() (
    set -e
    read -ep 'Website (Example: https://example.com): ' site

    printf "\nMirroring website... "
    # -q: quiet, -m: mirror the site recursively
    wget -qm "$site"
    printf "[\e[1;32mOK\e[0m]\n"

    printf "\nArchiving website...\n"
    # The fresh mirror is the newest entry in the current directory.
    # Ask the Wayback Machine to save every HTML page and print its HTTP
    # status (GNU find substitutes {} even inside the URL argument).
    find "$(\ls -t | head -n 1)" -name "*.html" \
        -exec curl -o /dev/null -sIw "{} %{http_code}\n" \
        "https://web.archive.org/save/https://{}" ';' \
        | grep --color=auto '[0-9][0-9][0-9]$\|$'
    printf "[\e[1;32mOK\e[0m]\n"
)
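The function mirrors the site with wget, then walks the mirror with find and hits the Wayback Machine's save endpoint once per HTML page. That endpoint also works on its own: prepend https://web.archive.org/save/ to any URL. A minimal one-off sketch, reusing this blog's index page from the run below as the example URL:

$ curl -o /dev/null -sIw "%{http_code}\n" \
      "https://web.archive.org/save/https://sergioro.mx/index.html"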

Now I can archive any website from the command line:

$ archive
Website (Example: https://example.com): https://sergioro.mx

Mirroring website... [OK]

Archiving website...
sergioro.mx/2019/09/25/bash-tricks-1.html 200
sergioro.mx/2019/09/25/quo-addon.html 200
sergioro.mx/tags/addon.html 200
sergioro.mx/tags/tricks.html 200
sergioro.mx/tags/firefox.html 200
sergioro.mx/tags/bash.html 200
sergioro.mx/tags/programming.html 200
sergioro.mx/tags/javascript.html 200
sergioro.mx/index.html 200
[OK]

Put it in a cron job and move on.
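Here is a minimal sketch of that cron entry, assuming the function is saved in ~/bin/archive.sh (a hypothetical path) and the mirror should land in /tmp. Because read only uses the interactive prompt when stdin is a terminal, echo can feed it the URL under cron:

# crontab -e: archive the blog at 03:00 on the first of every month
0 3 1 * *  cd /tmp && echo 'https://sergioro.mx' | bash -c '. "$HOME/bin/archive.sh"; archive'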