- How to Run Llama 3 Locally with Ollama and Open WebUI (dev.to): https://dev.to/timesurgelabs/how-to-run-llama-3-locally-with-ollama-and-open-webui-297d
- Running Llama 3 Model with NVIDIA GPU Using Ollama Docker on RHEL 9 (Medium): https://medium.com/@blackhorseya/running-llama-3-model-with-nvidia-gpu-using-ollama-docker-on-rhel-9-0504aeb1c924
- GPU support in Docker Compose (Docker docs): https://docs.docker.com/compose/gpu-support/