Categories
AI, ChatGPT, Docker, Ollama, Open Source AI, Resources, Tools & Applications, Tutorials

How to Run Your Own ChatGPT-Like AI Locally for Free

In today’s digital age, privacy-conscious tech enthusiasts are seeking alternatives to cloud-based AI services. What if you could run a powerful, ChatGPT-like AI directly on your personal computer, completely free of charge? This comprehensive guide will walk you through setting up a local large language model (LLM) that gives you full control over your AI interactions. You can pick any AI model you like, such as Meta’s Llama, Google’s Gemma, or even the recently popular and controversial DeepSeek R1.

Categories
AI, Open Source AI, Tools & Applications, Tutorials

How to Create an AI Website Chatbot with n8n

In today’s competitive online landscape, providing instant customer service can be a game-changer for your business. An AI chatbot on your website can handle inquiries, book meetings, and engage visitors 24/7 without human intervention.

This guide will show you how to create a powerful AI website chatbot using n8n in just half an hour. We’ll walk through the complete setup process, from initial configuration to deploying a fully functional chatbot on your website.

Categories
AI, Open Source AI, Tools & Applications, Tutorials

How to Run Deepseek Locally

The Safest Way to Use AI Models on Your Computer

In the rapidly evolving world of artificial intelligence, Deepseek has emerged as a game-changer. This powerful AI model has not only dethroned ChatGPT as the #1 app on app stores but has also demonstrated that sophisticated AI capabilities can be achieved with fewer resources than previously thought possible.

But with great power comes great responsibility, especially regarding data privacy and security. This comprehensive guide will walk you through why running Deepseek locally is important and how to do it safely.

Why You Should Run Deepseek Locally Rather Than Using the App or Website

The convenience of accessing Deepseek through their app or website comes at a potential cost: your data privacy. When you use Deepseek online, everything you input is stored on their servers. This means:

  1. You no longer have exclusive control over your data
  2. The information you share could be used in ways you don’t approve of
  3. Your data is subject to the cybersecurity laws of the country where the servers are located

For Deepseek specifically, the servers are based in China, where authorities have broad powers to request access to data stored within their borders. The concern isn’t unique to China, though — it applies to whichever country’s laws govern the servers holding your data.

Running AI models locally keeps your data on your machine and off external servers.

How to Run Deepseek Locally: Two Excellent Options

Fortunately, running Deepseek locally has become remarkably straightforward, even for those without extensive technical knowledge. Here are two excellent options to choose from based on your comfort level with technology.

Option 1: LM Studio – Perfect for Everyone (GUI-Based)

LM Studio offers a beautiful graphical user interface that makes running local AI models accessible to everyone.

Installation Steps:

  1. Visit LM Studio’s website
  2. Download the version for your operating system (Windows, Mac, or Linux)
  3. Follow the simple installation wizard
  4. The wizard will guide you through installing your first local AI model (likely Llama 3 or similar)

Key Features:

  • Intuitive interface for easy navigation
  • Built-in model discovery to find and download Deepseek models
  • Hardware compatibility check that tells you if your system can handle specific models
  • Multiple quantization options for different hardware capabilities

Option 2: Ollama – Fast and Command-Line Based

For those comfortable with command-line interfaces, Ollama offers a streamlined, efficient approach to running local AI models.

Installation Steps:

  1. Visit Ollama’s website
  2. Download the version for your operating system
  3. Open your terminal or command prompt
  4. Type ollama -h to verify installation and see available commands
  5. Run Deepseek with: ollama run deepseek-r1:1.5b (for the smallest model version)
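Once installed, Ollama can also be scripted non-interactively, which is handy for automation. The helper below is a minimal sketch — the `ask_local` function name and its fallback message are illustrative, not part of Ollama itself:

```shell
# Minimal sketch: send a one-off prompt to a local model.
# ask_local is a hypothetical helper; it falls back gracefully
# when ollama is not on the PATH.
ask_local() {
  if command -v ollama >/dev/null 2>&1; then
    ollama run deepseek-r1:1.5b "$1"
  else
    echo "ollama not installed"
  fi
}

ask_local "Explain quantization in one sentence."
```

Because `ollama run` accepts the prompt as an argument and exits when the response is done, this pattern slots easily into shell pipelines or cron jobs.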

Understanding Model Sizes and Hardware Requirements

When running AI models locally, it’s crucial to understand that model size significantly impacts performance and hardware requirements.

Deepseek Model Size Options:

  • 1.5B (billion parameters) – Can run on most modern computers
  • 7B – Requires a decent GPU
  • 14B to 32B – Requires a high-end GPU (like an NVIDIA RTX 4090)
  • 70B – Requires serious GPU hardware
  • 671B – Requires enterprise-level hardware (not feasible for most users)

The model size directly correlates with its intelligence and capabilities. While smaller models may not match the performance of cloud-based options, they still offer impressive functionality while keeping your data private.
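As a rough rule of thumb (an assumption for sizing, not an official Deepseek figure), a 4-bit quantized model needs on the order of half a byte per parameter, before counting context cache and runtime overhead. You can sketch the footprint in a couple of lines of shell arithmetic:

```shell
# Rough memory estimate for a 4-bit quantized model:
# ~0.5 bytes per parameter, ignoring KV-cache and runtime overhead.
estimate_gb() {
  billions=$1
  echo $(( billions * 5 / 10 ))   # billions of params * 0.5 -> GB
}

estimate_gb 7    # → 3
estimate_gb 32   # → 16
```

This explains the tiers above: a 7B model fits in a few GB of VRAM, while 32B already wants the 16 GB+ found on high-end cards. Treat the numbers as lower bounds — real usage is higher once a long context window is in play.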

Verifying That Your Local AI Model Isn’t Phoning Home

A legitimate concern when running AI models locally is whether they’re truly “offline” or if they might be secretly accessing the internet and sharing your data. Here’s how to verify:

  1. Run a network monitoring tool while using your local AI model
  2. For Ollama, you can use a PowerShell script to monitor network connections:
    • The only connection you should see is a local listening port (typically port 11434)
    • This port allows your interface to communicate with the model but doesn’t connect to external servers
    • When downloading models, you’ll temporarily see external connections, which is normal and necessary
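On Linux or macOS you can run the same check without PowerShell. The filter below is a small sketch: it reads `ss -tln`-style socket listings and keeps only lines touching Ollama’s default port (the `ollama_sockets` function name is made up for illustration):

```shell
# Keep only socket lines that mention Ollama's default port (11434).
ollama_sockets() {
  grep ':11434'
}

# Live usage would be:  ss -tln | ollama_sockets
# Demonstrated here on a captured sample listing:
printf 'LISTEN 0 4096 127.0.0.1:11434 0.0.0.0:*\nESTAB 0 0 10.0.0.5:52344 142.250.74.78:443\n' | ollama_sockets
# → LISTEN 0 4096 127.0.0.1:11434 0.0.0.0:*
```

If the only surviving line is a local LISTEN on 127.0.0.1:11434, the model is serving your local interface and nothing else; any ESTAB line to an outside address during normal chatting would deserve investigation.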

Maximum Security: Running Deepseek in a Docker Container

For the security-conscious user, running Deepseek inside a Docker container provides an additional layer of isolation and control.

Benefits of Using Docker:

  • Isolates the application from your operating system
  • Restricts access to network, files, and system settings
  • Allows precise control over resources and permissions
  • Provides read-only file system access for enhanced security

Requirements:

  • Docker installed on your system
  • For Windows: Windows Subsystem for Linux (WSL)
  • For GPU access: NVIDIA Container Toolkit (for NVIDIA GPUs)

Example Docker Command for Ollama:

docker run -d \
  --gpus all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  --privileged=false \
  --cap-drop=ALL \
  --cap-add=SYS_RESOURCE \
  --memory=16g \
  --cpu-shares=8192 \
  --read-only \
  ollama/ollama

Once running, you can interact with models using:

docker exec -it ollama ollama run deepseek-r1:1.5b
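Because the container publishes port 11434, you can also talk to the model over Ollama’s HTTP API from any client on your machine instead of exec-ing into the container. A hedged sketch — `build_payload` is an illustrative helper, while `/api/generate` and its model/prompt/stream fields are Ollama’s documented API:

```shell
# Build the JSON body for Ollama's /api/generate endpoint.
build_payload() {
  printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}

build_payload deepseek-r1:1.5b "Why run models locally?"
# → {"model":"deepseek-r1:1.5b","prompt":"Why run models locally?","stream":false}

# The live call, once the container is up:
#   curl -s http://localhost:11434/api/generate \
#     -d "$(build_payload deepseek-r1:1.5b 'Why run models locally?')"
```

Setting `"stream":false` returns one complete JSON response rather than a stream of tokens, which is simpler to handle in scripts.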

Conclusion: The Future of Private AI

Running Deepseek locally represents a significant shift in how we can interact with powerful AI tools while maintaining privacy. The breakthrough of Deepseek—achieving exceptional performance with fewer resources—signals that AI development is becoming more accessible and efficient.

By choosing to run these models locally, you’re not only protecting your data but also participating in a movement toward more private, user-controlled AI experiences. As hardware capabilities continue to improve, we can expect even more powerful models to become available for local use.

Whether you choose the user-friendly LM Studio or the efficient Ollama, running Deepseek locally provides a balance of powerful AI capabilities and enhanced privacy that cloud-based solutions simply cannot match.

FAQ

Q: Will running models locally be as good as using ChatGPT or Deepseek online?
A: Smaller models run locally won’t match the capabilities of the largest models running on powerful cloud servers. However, they still provide impressive functionality while keeping your data private.

Q: How much RAM do I need to run Deepseek locally?
A: For the 1.5B model, 8GB of RAM should be sufficient. Larger models require more RAM and ideally a dedicated GPU.

Q: Can I run Deepseek locally on a Mac with Apple Silicon?
A: Yes, through LM Studio or Ollama directly, but currently not with Docker, as it doesn’t support GPU access for Apple Silicon.

Q: Does running AI models locally use a lot of power?
A: When actively using the model, especially larger ones with GPU acceleration, power consumption will increase significantly. The model only uses substantial resources when actively generating responses.

Q: How do I know which model size to choose?
A: Start with the smallest (1.5B) and see if it meets your needs. If you have more powerful hardware and need more capabilities, gradually try larger models.

Categories
General

Monitor Network Connections

I developed this monitoring script after repeatedly needing to troubleshoot connectivity issues with Ollama, a local AI model runner. While working with multiple models and clients simultaneously, I needed to quickly identify which connections were active, which ports were in use, and whether connections were establishing or terminating properly. Standard task managers didn’t provide this network-specific detail. This tool offers real-time visibility into exactly which addresses and ports Ollama (or any process) is communicating with. That makes it significantly easier to diagnose configuration problems, optimize connection handling, and verify network resource usage — without resorting to complex packet sniffers or enterprise monitoring solutions.

Categories
Git

Git Workflow Scenarios

From Solo Development to Team Collaboration

Git has revolutionized how developers manage code and collaborate on projects of all sizes. Whether you’re a solo developer building a personal project or part of a large team working on enterprise software, implementing an effective Git workflow is crucial for productivity and code quality. This article explores practical Git scenarios for both individual developers and teams, showcasing how Git’s branching, merging, and collaboration features adapt to different development environments.

Categories
Git

Essential Git Commands Reference

What is Git?

Git is a distributed version control system that tracks changes in any set of computer files, typically used for coordinating work among programmers collaboratively developing source code during software development.
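As a quick illustration of that definition, the snippet below creates a throwaway repository and records one change — the directory, file name, and commit message are arbitrary examples:

```shell
# Create a scratch repository, stage a file, and commit it.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo "hello" > notes.txt
git add notes.txt
git commit -qm "Add notes"
git rev-list --count HEAD   # → 1
```

Every clone of this repository would carry the full history — that is what “distributed” means in practice: there is no single central copy that other machines merely check files out of.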

Categories
NodeJS

Essential Node.js, npm, and nvm Commands Reference

Node.js Commands

Basic

# Run a JavaScript file
node app.js

# Start the REPL (interactive shell)
node

# Check Node.js version
node -v

# Run with debugging
node --inspect app.js

# Run with the debugger, pausing before the first line of user code (for Chrome DevTools)
node --inspect-brk app.js

Categories
Bizagi Tips and Tricks

How to Use Postman to Call Bizagi Web Services

Introduction

Postman is a powerful API testing tool that allows developers and testers to send requests, validate responses, and automate workflows. While Postman works seamlessly with JSON APIs, working with XML responses often requires additional processing.

In this article, we will:

  • Configure Postman to make web service calls.
  • Explain why XML transformation is necessary.
  • Use JavaScript in Postman’s “Tests” tab to modify XML responses dynamically.

By the end, you’ll be able to restructure API responses in real-time, making API testing more efficient.

Categories
Bizagi Tips and Tricks

Bizagi Development Handbook

As a Bizagi developer, I often find myself using the same code snippets, XML structures, and syntax repeatedly. Instead of memorizing everything, I’ve decided to create this Bizagi Development Handbook—a quick reference guide where I can store essential code, configurations, and best practices that I use regularly.

This article is a work in progress, and I’ll continue adding more content over time as I refine and expand my knowledge. Whether it’s scripting, XML structures, or integration techniques, this guide will evolve into a comprehensive resource for Bizagi development. Stay tuned for updates!

Categories
RPA UiPath

Automation Developer Professional 30-Day Study Plan

Are you looking to earn the UiPath Automation Developer Professional certification? Achieving it can significantly boost your career in robotic process automation (RPA) and validate your expertise in UiPath.

To help you prepare efficiently, I’ve put together a structured 30-day study plan that covers all the essential topics, hands-on exercises, and exam strategies to ensure success. Let’s dive in!