I used to archive websites manually by visiting web.archive.org and typing in each URL by hand. This was slow and inefficient. Happily, I found a better approach: define this Bash function:

# archive an entire site with archive.org
# @usage archive
archive() (
    set -e
    read -ep 'Website (Example: https://example.com): ' site

    # mirror the site locally with wget
    printf "\nMirroring website... "
    wget -qm "$site"
    printf "[\e[1;32mOK\e[0m]\n"

    # submit every mirrored HTML page to the Wayback Machine's
    # save endpoint, printing each page with its HTTP status code
    printf "\nArchiving website...\n"
    find "$(\ls -t | head -n 1)" -name "*.html" \
        -exec curl -o /dev/null -sIw "{} %{http_code}\n" \
        "https://web.archive.org/save/https://{}" ';' \
        | grep --color=auto '[0-9][0-9][0-9]$\|$'
    printf "[\e[1;32mOK\e[0m]\n"
)

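Before pointing the function at a large site, it can help to preview exactly which save URLs would be requested. The sketch below mirrors the URL construction done by the find/curl pipeline; the `build_save_urls` helper and the demo directory are my own names for illustration, not part of the original function:

```shell
# build_save_urls: print the Wayback Machine save URL for every
# .html file under the given directory, without making any requests
build_save_urls() {
    find "$1" -name '*.html' | while read -r page; do
        printf 'https://web.archive.org/save/https://%s\n' "$page"
    done
}

# demo on a throwaway directory standing in for a wget mirror
mkdir -p demo.example/tags
touch demo.example/index.html demo.example/tags/bash.html

build_save_urls demo.example | sort
# prints:
# https://web.archive.org/save/https://demo.example/index.html
# https://web.archive.org/save/https://demo.example/tags/bash.html
```

This is a handy dry run: if the list looks wrong (for example, the newest directory in the working directory is not the mirror), you catch it before hammering archive.org with requests.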
To archive a website, simply call the newly created function:

$ archive
Website (Example: https://example.com): https://sergioro.mx

Mirroring website... [OK]

Archiving website...
sergioro.mx/2019/09/25/bash-tricks-1.html 200
sergioro.mx/2019/09/25/quo-addon.html 200
sergioro.mx/tags/addon.html 200
sergioro.mx/tags/tricks.html 200
sergioro.mx/tags/firefox.html 200
sergioro.mx/tags/bash.html 200
sergioro.mx/tags/programming.html 200
sergioro.mx/tags/javascript.html 200
sergioro.mx/index.html 200
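Not every save succeeds; the Wayback Machine occasionally answers with a 5xx status. Since the function prints one "path status" pair per line, a short awk filter can list the pages worth re-submitting. This is a sketch, and the `list_failures` name is mine:

```shell
# list_failures: read "path status" lines on stdin and print the
# paths whose HTTP status was anything other than 200
list_failures() {
    awk '$2 != 200 { print $1 }'
}

# example with one simulated failure
printf '%s\n' \
    'sergioro.mx/index.html 200' \
    'sergioro.mx/tags/bash.html 523' \
    | list_failures
# prints:
# sergioro.mx/tags/bash.html
```

Piping the archive output through this helper gives a ready-made retry list.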

A nice trick to save quite a lot of time.