In this tutorial, I will demonstrate how to install DeepSeek on a local computer or on your own private GPU server with Ollama, Open WebUI, and other tools. It took me hours to find the best options, so hopefully this tutorial will save you a lot of time!
I also demonstrate how to use my product, InfraNodus, to steer your AI conversations towards specific topics, concepts, or gaps, and to enrich your DeepSeek chat interactions with the high-quality prompts it generates.
This tutorial is also available as an article with all the installation links.
Timecodes:
0:00 Why you might need a private DeepSeek model
0:39 How to guide your model interactions with a graph
1:47 How to install DeepSeek with Ollama (or Llama, which is faster)
5:28 How to install a UI for the model
6:17 What is Docker?
8:27 Starting the local DeepSeek chat
8:58 Uploading your own files and context
9:49 Using InfraNodus to guide the AI interaction
12:47 Another UI approach: using VSCode with Continue extension
15:54 Installing DeepSeek on your own public server (time saver!)
17:04 Comparing different hosting providers (AWS, Koyeb, Grow, Elest.io)
18:50 1-Click DeepSeek deployment with Koyeb for experimentation
20:54 Persistent deployment (for production)
24:08 1-Click deployment via Elest.io (persistent, but pricey)
25:45 Deploying production-ready server on AWS (very expensive and complex)
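The local setup from the chapters above (DeepSeek served by Ollama, with Open WebUI as the chat interface, both running in Docker) can be sketched as a minimal docker-compose file. This is only a sketch based on the publicly documented Ollama and Open WebUI images and their default ports; the host port 3000 and the volume names are arbitrary choices, not something prescribed in the video:

```yaml
services:
  ollama:
    image: ollama/ollama            # serves models on port 11434 by default
    volumes:
      - ollama-data:/root/.ollama   # persist downloaded models between restarts
    ports:
      - "11434:11434"

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # point the UI at the Ollama container (service name resolves via Docker DNS)
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"                 # chat UI at http://localhost:3000
    depends_on:
      - ollama

volumes:
  ollama-data:
```

After `docker compose up -d`, you would still need to pull a DeepSeek model into the Ollama container (for example with `docker exec -it <ollama-container> ollama pull deepseek-r1`) before it appears in the Open WebUI model list.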
Please support me by trying InfraNodus, a visual AI text analysis tool!
#deepseek #infranodus
Disclaimer
The content published on this page is sourced from external platforms, including YouTube. We do not own or claim any rights to the videos embedded here. All videos remain the property of their respective creators and are shared for informational and educational purposes only.
If you are the copyright owner of any video and wish to have it removed, please contact us, and we will take the necessary action promptly.