Chandan Kumar

Setting Up a Developer Windows PC for AI Application Development - A Complete Guide



The goal of this blog is to guide you through setting up your AI development environment within Windows Subsystem for Linux (WSL). WSL allows you to harness the full power of Linux tools and workflows directly inside Windows, without the need for a dual-boot setup. By using WSL, you can seamlessly integrate tools like Docker, VSCode, Git, and Python into an optimal development environment for AI applications, taking full advantage of Linux's capabilities while staying in the familiar Windows desktop.


Developing AI applications requires a powerful and efficient development environment. Whether you're working on machine learning models, natural language processing (NLP), or computer vision, having the right tools and infrastructure can dramatically improve your workflow. This blog post will walk you through the steps to set up your development PC for AI application development using Docker, VSCode, Ollama, Python, and GitHub.



Why These Tools?

Let's break down the tools we're using and why they're important:

  • WSL: For Windows users, Windows Subsystem for Linux (WSL) allows you to run a full Linux distribution on your Windows PC. This is necessary for many tools, including Docker, which rely on Linux-based environments for smooth operation. WSL bridges the gap, allowing you to use Linux tools on your Windows machine without the complexity of dual-booting or setting up a virtual machine.

  • Docker: Docker is a containerization platform that allows you to package your applications and all their dependencies into a container. This ensures that your code runs consistently across different environments (local, cloud, or production).

  • VSCode: Visual Studio Code is a lightweight but powerful code editor with extensions that support Python development, Docker integration, and more.

  • Ollama: Ollama is a tool that helps you run AI models locally without needing to rely on cloud-based solutions. It's particularly useful when working on AI models that need constant iteration.

  • Python: The most commonly used language in AI development, Python supports an extensive ecosystem of libraries and frameworks such as TensorFlow, PyTorch, Scikit-learn, and more.

  • GitHub: GitHub is a cloud-based hosting service for Git version control that enables you to manage your codebase, collaborate with others, and easily integrate with various CI/CD tools.


With these tools, you'll be equipped to handle most AI development tasks while ensuring a consistent and efficient workflow. Leveraging WSL for your development setup will streamline your work and provide the flexibility and power of Linux-based tools while staying within the Windows ecosystem.


Enable and Set Up WSL (For Windows Users)


If you're using Windows and want to leverage the power of Docker, it's essential to enable and set up WSL (Windows Subsystem for Linux). WSL allows you to run a Linux environment directly on Windows, which is crucial for running Docker containers and many development tools that are designed for Linux.


For the record, despite being a great tool, setting up WSL on Windows is not without its hoops.


Install WSL


  1. Enable WSL in Windows Features (Start -> type "features" and open "Turn Windows features on or off")


Enable the following features:


  • Virtual Machine Platform

  • Windows Subsystem for Linux


Windows WSL Feature

  • Run Windows Update and restart the system to ensure all packages are up to date.


  • Open PowerShell (in the Windows Terminal application, not the legacy Command Prompt) and list the distributions available online:


wsl.exe --list --online

  • Install Ubuntu 22.04 LTS:


wsl.exe --install Ubuntu-22.04

Once installation completes, you should see Ubuntu as a profile in your Windows Terminal tabs.


WSL Option in Windows Terminal
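
Before moving on, it's worth sanity-checking the installation from PowerShell. The commands below are standard wsl.exe options; they simply confirm which distributions are registered and that they run under WSL 2.

wsl --status                  # default distribution and WSL version
wsl --list --verbose          # registered distros and whether each uses WSL 1 or 2
wsl --set-default-version 2   # make WSL 2 the default for future installs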

Install Docker


Docker is a must-have for managing the dependencies and environments for your AI projects. Here’s how to get started with Docker on your machine:


Install Docker Desktop


  • Download Docker: Go to the Docker download page and select the appropriate version for your operating system (Windows/Mac/Linux).

  • Install Docker: Follow the installation instructions specific to your operating system.

    • Windows: Make sure to enable the WSL 2 backend during installation.

    • Mac: Docker Desktop integrates easily with macOS.

    • Linux: You can follow the official instructions to set up Docker for your specific distribution.

  • Verify Installation: Open a terminal or command prompt and run the following command to ensure Docker is installed correctly:

docker --version

Then run the Docker hello-world image:

docker run hello-world
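
Beyond hello-world, a quick way to confirm that Docker's WSL 2 backend can run the kind of Linux images you will actually use is to start a throwaway Python container. The python:3.11-slim tag is just an example; any recent official Python image works.

# pulls the image on first run, prints a message, then removes the container
docker run --rm python:3.11-slim python -c "print('Linux containers are working')"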


Install VSCode


Visual Studio Code is one of the best code editors for AI development due to its versatility, extension ecosystem, and support for various programming languages.


Install VSCode


  1. Download VSCode: Go to the VSCode website and download the appropriate version for your OS.

  2. Install VSCode: Follow the installation instructions for your operating system.

  3. Install Extensions:

    • Python Extension: Search for the Python extension in the VSCode marketplace to enable linting, IntelliSense, and code debugging.

    • Docker Extension: This allows you to interact with Docker directly from VSCode. It simplifies container management.

    • GitHub Extension: The GitHub extension helps you manage your repositories and workflows directly within VSCode.
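
If you prefer the command line, the same extensions can be installed with VSCode's code CLI. The extension IDs below are the marketplace IDs at the time of writing; the WSL extension is added because it lets VSCode open projects stored inside your Ubuntu distribution.

code --install-extension ms-python.python                   # Python language support
code --install-extension ms-azuretools.vscode-docker        # Docker integration
code --install-extension GitHub.vscode-pull-request-github  # GitHub pull requests and issues
code --install-extension ms-vscode-remote.remote-wsl        # open folders that live in WSL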


Install Ollama for AI Models


Ollama enables you to run AI models locally on your machine, bypassing the need for remote cloud services. It’s ideal for experimenting with and testing AI models.

Install Ollama

  1. Download Ollama: Visit the Ollama website to download the installer for your operating system.

  2. Install Ollama: Follow the installation instructions provided on the website.

  3. Using Ollama: Once installed, you can use Ollama to pull various AI models and run them on your local machine.

Example:

ollama run llama3.2:1b
Ollama with llama 3.2 1b
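
A couple of other Ollama subcommands are handy in day-to-day work: pull downloads a model without starting a chat session, and list shows what is already on disk.

ollama pull llama3.2:1b   # download the model without running it
ollama list               # models available locally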

Alternatively (recommended): run Ollama in Docker

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Ollama Docker image
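
With the container running, you can execute Ollama commands inside it, and because port 11434 is published you can also reach its REST API from the host. The prompt below is only an example.

# run a model interactively inside the container
docker exec -it ollama ollama run llama3.2:1b

# or query the API from the host
curl http://localhost:11434/api/generate -d '{"model": "llama3.2:1b", "prompt": "Why is the sky blue?", "stream": false}'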

Install Python & AI Libraries


Python is the go-to programming language for AI development. To work with AI models, you need the right libraries.


Install Python

  • Install Windows Native Python

Type python3 into the terminal; a Microsoft Store page will pop up with Python ready to install.


MS Store Python3


  • Verify Installation:

PS C:\Users\chand> python3
Python 3.13.1 (tags/v3.13.1:0671451, Dec  3 2024, 19:06:28) [MSC v.1942 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print ("Hello world")
Hello world
>>>

  • Recommended Approach


Python itself generally works well on Windows, but much of the AI ecosystem is developed primarily for Linux and macOS, so packages don't always build or behave the same on Windows. Unless you have the Visual Studio C++ build tools installed, you may run into trouble compiling certain libraries.


To avoid these problems, set up your Python development environment inside WSL/Ubuntu, as sketched below.
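
A minimal sketch of that setup, run from your Ubuntu shell, looks like the following. The virtual-environment path and the library list are only examples; install whichever frameworks your project actually needs.

sudo apt update
sudo apt install -y python3 python3-pip python3-venv
python3 -m venv ~/venvs/ai-dev          # example location for the virtual environment
source ~/venvs/ai-dev/bin/activate
pip install --upgrade pip
pip install torch scikit-learn jupyter  # example AI/ML libraries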



Set Up GitHub for Version Control (in WSL)


Version control is essential for managing your AI projects, especially when collaborating with others. GitHub is one of the most popular platforms for version control.

Set Up Git and GitHub

  • Install Git:

sudo apt update && sudo apt install git -y
  • Configure Git:

git config --global user.name "Chandan Kumar" 
git config --global user.email "chandan.kumar@db-agent.com"
  • Create a GitHub Repository:

  • Go to GitHub and create a new repository.

  • Clone the repository to your local machine:
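
For example, substituting your own GitHub user name and repository name:

git clone https://github.com/<your-username>/<your-repo>.git
cd <your-repo>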

  • Commit Changes: As you develop your AI models, make sure to commit changes regularly.

git add . 
git commit -m "Initial commit" 
git push origin main

Putting It All Together


Now that all the essential tools are installed, here’s how you can put them together for a streamlined workflow:


  • Use Docker to create isolated environments for different projects.

  • Use VSCode for editing your Python code, managing Docker containers, and interacting with GitHub.

  • Pull and run AI models using Ollama to test your code locally before deployment.

  • Use GitHub for version control and collaboration with your team.


Conclusion


With Docker, VSCode, Ollama, Python, and GitHub, you now have a robust and scalable development environment for AI applications. Docker ensures consistency across environments, VSCode provides a feature-rich code editor, Ollama allows you to run AI models locally, Python gives you the flexibility to build AI applications, and GitHub provides powerful version control and collaboration tools.

By setting up these tools on your developer PC, you can focus on what matters most—creating and deploying AI applications efficiently. Happy coding!
