nvm (Node Version Manager) is a tool that lets you install and manage multiple versions of Node.js on your Mac. It is designed to be installed per user and invoked per shell, and it works on any POSIX-compliant shell (sh, dash, ksh, zsh, bash) on Unix, macOS, and Windows WSL.
To install nvm on a Mac, you will need to follow these steps:
Install Homebrew
macOS does not ship with a package manager that provides nvm, so you will need to install Homebrew first. To do this, open a terminal window and run the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
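Once the installer finishes, you can confirm Homebrew is available on your PATH (the version number reported will differ from machine to machine):
brew --version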
Install nvm
Once you have Homebrew installed, you can use it to install nvm by running the following command:
brew install nvm
Add nvm to your shell profile
To make nvm available every time you open a new terminal window, add the following line to your shell profile (e.g., ~/.bash_profile or ~/.zshrc):
source $(brew --prefix nvm)/nvm.sh
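Homebrew's post-install notes typically also suggest creating nvm's working directory and exporting NVM_DIR before sourcing the script. As a minimal sketch, assuming the default ~/.nvm location, the full profile snippet could look like this:
mkdir -p ~/.nvm                      # one-time: directory where nvm stores Node.js versions
export NVM_DIR="$HOME/.nvm"          # tell nvm where that directory is
source $(brew --prefix nvm)/nvm.sh   # load nvm into the shell
After saving the file, run source ~/.zshrc (or open a new terminal window) so the changes take effect.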
Install Node.js
Once nvm is installed, you can use it to install the latest version of Node.js by running the following command:
nvm install node
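To confirm the installation worked, check which version nvm activated; the exact version number will depend on the current Node.js release:
node --version   # prints the installed Node.js version
nvm current      # shows the version nvm is currently using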
How to use a specific version of Node.js
To use a specific version of Node.js with nvm, you will need to follow these steps:
List available Node.js versions
To see a list of all available Node.js versions that you can install with nvm, run the following command:
nvm ls-remote
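The full list is long, so it can help to narrow it down; for example, nvm can show only long-term support (LTS) releases:
nvm ls-remote --lts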
Install the desired version
To install a specific version of Node.js, such as version 16, use the following command:
nvm install 16
Use the installed version
Once the desired version of Node.js is installed, you can use it by running the following command:
nvm use 16
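Note that nvm use only affects the current shell session. You can verify the switch at any time:
node --version   # should now report a 16.x release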
Set the default version
If you want to use a specific version of Node.js by default, you can set it as the default version using the following command:
nvm alias default 16
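To double-check which versions are installed and which one is aliased as default, list them with:
nvm ls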