Your Own LLMs: Running LLMs on your computer

This workshop will show you how to set up and run open-source Large Language Models (LLMs) directly on your own computer, giving you private, fast, and flexible AI without relying on cloud services. You'll learn how to use LLMs for tasks like writing, analysis, automation, research, and coding.


By the end of the session, you will know:

• What hardware local LLMs require

• How to install and run popular models (LLaMA, Mistral, Phi, Gemma, Qwen, etc.)

• How to use tools like LM Studio, Ollama, and text-generation-webui

• How to choose the right model size and quantization

• How to integrate local LLMs into everyday workflows

• The privacy benefits of running models locally

Date: Tuesday, February 10, 2026
Time: 5:00pm - 5:45pm

Registration is required. There are 7 seats available.