October 10, 2025
How to Set Up Coding Tools Efficiently: Developer Guide

Here's something nobody wants to admit: most development teams spend their first two weeks fighting with environment setup. Not because the tools are complicated. Because everyone treats setup like a one-time event instead of a system.
Think about what happens when a new developer joins your team. They get a laptop, maybe a half-written wiki page, and a senior developer who's "happy to help" but secretly dreading the next three days of Slack messages. The new person installs things in the wrong order. Something breaks. They Google the error. They install more things. More stuff breaks. Eventually, after enough trial and error, things work. Nobody knows why.
The weird part? This happens at companies with hundreds of developers. Places that obsess over code review and deployment pipelines somehow accept that onboarding is just going to be painful. It's like having a beautiful house but making everyone climb through the window because nobody fixed the front door.
The solution isn't better documentation. It's treating setup the way you'd treat any other code: as something that should work the same way every time.
Why Setup Matters More Than You Think
Most developers think environment setup is boring. They're wrong. It's actually one of the most important things a team does, because it determines what "normal" looks like for everyone.
If setup takes two weeks and involves seventeen manual steps, that becomes your baseline. New tools get evaluated on whether they're worth adding to those seventeen steps. Experimentation becomes expensive. Teams ossify.
But if you can get someone productive in 15 minutes, everything changes. Trying a new database? Sure, takes five minutes. Want to work on a different project? Easy. Computer dies? Get a new one and you're back in 20 minutes.
The difference isn't just time. It's what kind of team you become.
Microsoft reports up to 30% faster file I/O on Windows with Dev Drive, which matters if you're running containers. But the real win isn't the speed. It's that you can tell Windows developers: "Here's the script, run it, you're done." No caveats about which version of Windows or whether they need to install Visual Studio first or any of that.
The 15-Minute Reality Check
Can someone on your team go from a blank laptop to pushing code in 15 minutes? If not, you've got a setup problem.
Here's what that actually looks like:
First, you need a package manager. Not because package managers are intrinsically good, but because installing things manually is how you get into "works on my machine" hell.
On Windows, that's Chocolatey:
Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
On macOS, it's Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
On Linux:
sudo apt update && sudo apt upgrade -y
sudo apt install -y build-essential curl wget git
Notice these take about three minutes. Not because the commands are fast, but because you're not making decisions. Decision-making is what kills setup time. Every time someone has to Google "should I install this?" you've lost 20 minutes.
Then install the core tools. On Windows:
choco install vscode git docker-desktop -y
On macOS:
brew install --cask visual-studio-code git docker
This works because these tools are the same tools everyone uses. You're not picking the "best" editor or the "right" version control. You're picking the ones that work.
Configure Git:
git config --global user.name "Developer Name"
git config --global user.email "developer@company.com"
ssh-keygen -t ed25519 -C "developer@company.com"
Install VS Code extensions:
code --install-extension ms-python.python
code --install-extension ms-vscode.vscode-typescript-next
code --install-extension ms-azuretools.vscode-docker
code --install-extension eamodio.gitlens
That's it. A handful of commands, 15 minutes, done. The person can now write code, commit it, and run containers. Everything else is extra.
The validation is simple. Can they run git --version, docker --version, and code --version? Can Docker run hello-world? Then setup worked.
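That check is worth scripting so it runs the same way for everyone. Here's a minimal sketch; the check_tools helper is hypothetical, not part of any standard tooling:

```shell
# Hypothetical smoke test for the 15-minute setup: verify each required
# tool is on PATH, printing one line per tool and failing on the first gap.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool" >&2
      return 1
    fi
  done
}

# Example: check_tools git docker code && docker run --rm hello-world
```

Run it at the end of the setup script and the "did it work?" question answers itself.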
What Most Teams Get Wrong
Most teams think the problem is complexity. They're trying to support too many tools, too many languages, too many configurations. So they write documentation. Lots of documentation.
This is backwards. The problem isn't complexity, it's variability.
Think about it. Your production environment is highly standardized. Everyone runs the same OS, the same database version, the same everything. But development environments? Those are unique snowflakes. Jimmy's running Python 3.8 because that's what was current when he joined. Sarah's using Python 3.11 because she read it's faster. Bob's using Python 2.7 because there's one script that won't migrate.
This makes no sense. If your team wouldn't accept "we'll just run different OS versions in production and hope it works," why accept it in development?
The fix is simple but not easy: everyone runs the same versions. Not approximately the same. Exactly the same.
Here's how that works with Node.js:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install 18.15.0
nvm use 18.15.0
And Python:
curl https://pyenv.run | bash
pyenv install 3.11.2
pyenv global 3.11.2
And Java, using SDKMAN:
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
sdk install java 17.0.2-open
Then you pin the versions in the project itself. The .tool-versions file below is asdf's format; nvm and pyenv read their own equivalents (.nvmrc, .python-version), but the idea is the same:
# .tool-versions
nodejs 18.15.0
python 3.11.2
java openjdk-17.0.2
Now when someone opens the project, they get the right versions. Not "compatible" versions. The right versions.
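Version drift is also easy to detect mechanically. The sketch below is a hypothetical POSIX-shell helper (drift_report is not a real tool) that compares pinned versions against installed ones; to keep the logic self-contained, the installed versions are passed in as tool=version arguments rather than queried from the system:

```shell
# Hypothetical drift check: reads ".tool-versions"-style "tool version" lines
# on stdin and compares each pin against "tool=installed" pairs given as
# arguments (in real use you'd collect those from node --version, etc.).
drift_report() {
  while read -r tool pinned; do
    case "$tool" in ''|'#'*) continue ;; esac   # skip blanks and comments
    installed=""
    for pair in "$@"; do
      case "$pair" in "$tool="*) installed=${pair#*=} ;; esac
    done
    if [ "$installed" = "$pinned" ]; then
      echo "ok: $tool $pinned"
    else
      echo "drift: $tool pinned $pinned, found ${installed:-none}"
    fi
  done
}

# Example: drift_report "nodejs=$(node --version | tr -d v)" < .tool-versions
```

Wire something like this into CI and "Bob is still on Python 2.7" stops being a surprise you discover during an outage.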
The Container Insight
Here's where it gets interesting. Even with perfect version management, you still have the OS problem. Development happens on macOS, Windows, and Linux. Production runs on Linux. How do you make that work?
The old answer was "make development look like production." Use Linux for development, or use VMs, or use WSL2. These all work, sort of. But they're solving the wrong problem.
The right answer is "make both development and production look like containers."
Here's what that looks like with VS Code Dev Containers:
{
  "name": "Node.js Development",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:18",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/github-cli:1": {}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-vscode.vscode-typescript-next",
        "esbenp.prettier-vscode"
      ]
    }
  },
  "forwardPorts": [3000, 8080],
  "postCreateCommand": "npm install"
}
This is a complete development environment. It specifies the OS (implicitly Linux), the runtime (Node 18), the tools (Docker, GitHub CLI), and even which VS Code extensions to use. Someone opens the project, and VS Code says "you want to open this in a container?" They click yes, wait 30 seconds, and they're in a working environment.
No "did you install Node?" No "which version of Node?" No "did you remember to install TypeScript globally?" All of that is in the container definition.
The really clever bit is that this works the same way in production. You can use the same container image for development and deployment. Not "similar" images. The same image. Suddenly "works on my machine" becomes impossible, because everyone's machine is the same machine.
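As an illustration, CI can run in the very same image as the dev container, so tests execute in the environment developers actually use. This GitHub Actions sketch assumes an npm test script exists; the job name is arbitrary:

```yaml
# Hypothetical CI job pinned to the same image as the dev container.
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: mcr.microsoft.com/devcontainers/typescript-node:18
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```

When the dev container image changes, this job changes with it, because there's only one image to change.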
Why This Actually Matters
You might think this is just about convenience. It's not. It's about what kind of work becomes possible.
When setup is hard, teams become conservative. Nobody wants to try new tools or languages because getting everyone set up would be a nightmare. The Python team can't easily help the JavaScript team because switching contexts means reinstalling everything.
When setup is easy, teams become fluid. Someone can work on the frontend in the morning and the backend in the afternoon. New libraries get evaluated quickly because "let's try it" doesn't mean "let's spend a day getting it working."
This compounds. Teams that experiment more learn faster. Teams that learn faster build better software. Teams that build better software attract better developers. Better developers demand better tools, which makes setup even easier.
It's a flywheel, but you have to get it spinning.
The Security Thing Nobody Talks About
Here's something that should be obvious but isn't: if everyone's development environment is different, you can't secure them.
Think about what happens with manual setup. Someone Googles "how to install PostgreSQL on Mac." They find a tutorial from 2019. They follow it. The tutorial says to disable some security feature because otherwise it won't work. They disable it. It works. They move on.
Now multiply that by every tool, every developer, every setup. You've got dozens of unique security configurations, all slightly wrong in different ways.
With containers, this can't happen. The container definition is code. It goes through code review. Someone says "why are we disabling that security feature?" You fix it or you document why it's necessary. Then everyone gets the fixed version.
This matters more than most teams realize. Cortex DevOps security research shows that development environment compromises are way more common than production breaches. Makes sense. Production gets scrutinized. Development environments are trusted by default.
The fix is making development environments as reproducible as production. Here's what that looks like:
FROM node:18.15.0-alpine
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
USER nodejs
WORKDIR /app
COPY --chown=nodejs:nodejs . .
RUN npm ci --only=production && npm cache clean --force
EXPOSE 3000
CMD ["node", "server.js"]
Notice what's happening here. The container runs as a non-root user. It uses a specific, minimal base image. It cleans the cache after installing dependencies. These aren't accidents. They're security decisions encoded in the setup.
Now when someone copies this, they get the security decisions too. They don't have to remember to create a non-root user. They don't have to know that Alpine is more secure than Ubuntu. They just run the container and it's secure.
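One detail that COPY . . makes important: everything in the build context ends up in the image unless you exclude it. A .dockerignore file encodes that decision too; the entries here are illustrative, adjust them to your project:

```text
# .dockerignore: keep secrets and local cruft out of the image.
node_modules
.env
.git
*.log
```

Without it, a local .env file full of credentials ships inside every image you build.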
The Cross-Platform Problem
Here's a fun fact: most teams that claim to support Windows don't actually support Windows. They support macOS and Linux, and Windows developers suffer through WSL or VMs or just constant frustration.
This happens because testing cross-platform compatibility is hard. You can't just write code on a Mac and assume it'll work on Windows. File paths are different. Line endings are different. Permissions work differently. Tools have different names.
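Line endings in particular are worth pinning in the repository itself rather than in each developer's Git config. A .gitattributes file like this (a common convention; adjust the patterns to your project) makes checkouts behave the same on every OS:

```text
# .gitattributes: normalize line endings across Windows, macOS, and Linux.
* text=auto
*.sh text eol=lf
*.ps1 text eol=crlf
```

Commit it once and "the script fails because of CRLF" disappears as a bug class.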
The traditional solution is "use Linux everywhere." But that's not realistic. Designers use Macs. Some enterprises mandate Windows. Telling people to switch OSes for development is a losing battle.
The container solution works because the code doesn't run on the host OS. It runs in Linux, regardless of the host. A VS Code Dev Container on Windows is running exactly the same Linux environment as one on macOS.
For Windows specifically, Microsoft's WSL2 changes everything. It's not a VM. It's actual Linux running inside Windows with proper integration. File system access is fast. Networking works. It's genuinely good.
You can configure it like this:
wsl --install -d Ubuntu-22.04
wsl --set-default-version 2
echo '[wsl2]' > $env:USERPROFILE\.wslconfig
echo 'memory=8GB' >> $env:USERPROFILE\.wslconfig
echo 'processors=4' >> $env:USERPROFILE\.wslconfig
Now Windows developers can use the same tools as everyone else. Not "similar" tools. The same tools. The same scripts. The same containers.
Why GitHub Codespaces Changes Things
The really interesting development isn't better local tools. It's not needing local tools at all.
GitHub Codespaces takes the dev container idea and runs it in the cloud. You click a button, wait 30 seconds, and you're in a full development environment in your browser. Same container, same tools, same extensions. But you didn't install anything.
This seems like a small thing. It's not. It means development environments can be as disposable as cloud servers. Need to test a fix in an old branch? Spin up a codespace for that branch. Done with it? Delete it. Want to try a risky refactoring? Do it in a codespace so you don't mess up your main environment.
It also means that "powerful laptop" stops being a requirement. You can develop on a Chromebook or an iPad because the actual work is happening on a 32-core cloud machine. The laptop just displays the interface.
The configuration looks like this:
{
  "name": "Enterprise Development Environment",
  "image": "mcr.microsoft.com/devcontainers/universal:linux",
  "features": {
    "ghcr.io/devcontainers/features/node:1": {"version": "18"},
    "ghcr.io/devcontainers/features/python:1": {"version": "3.11"},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "postCreateCommand": "npm install && pip install -r requirements.txt",
  "forwardPorts": [3000, 8080, 5432]
}
That's it. That file in your repository defines the entire development environment. New developer joins? They clone the repo and click "Open in Codespace." They're productive immediately.
What This Means for Teams
The shift from "development environments are artisanal and unique" to "development environments are code" changes how teams work.
First, onboarding becomes trivial. Remember that two-week nightmare? Now it's 15 minutes of watching install scripts run. The new person doesn't need tribal knowledge about which tools to install or which settings to change. They just run the setup and it works.
Second, experimentation becomes cheap. Want to try a new framework? Make a branch, update the container definition, see if you like it. If not, delete the branch. You didn't install anything on your laptop, so there's nothing to uninstall.
Third, consistency becomes automatic. Everyone's running the same versions because the versions are defined in code. You can't have version drift because there's nowhere for versions to drift. They're locked in the container.
Fourth, documentation becomes unnecessary. Not the "how this code works" documentation. The "how to get this code running" documentation. That's all in the setup scripts now. The scripts are the documentation, and unlike wiki pages, they can't get out of date.
The Broader Implication
Here's what's actually happening: development is becoming less about configuring environments and more about writing code.
That sounds obvious, but think about what percentage of developer time goes into environment issues. "Works on my machine but not in CI." "Need to upgrade Node for this project but can't because that other project needs the old version." "Spent three hours debugging, turns out I had the wrong Java version."
All of that goes away with proper environment management. And when it goes away, development gets faster. Not 10% faster. Way faster. Microsoft's Dev Drive numbers show up to 30% better I/O performance alone. But the real speedup is from not having to think about setup anymore.
This is similar to what happened with cloud infrastructure. Twenty years ago, getting a server meant weeks of procurement, hardware setup, OS installation, network configuration. Now you click a button and have a server in 60 seconds. Development environments are going through the same transition.
The teams that figure this out first will have a massive advantage. They'll onboard faster, experiment more, and build better software. The teams that don't will keep losing weeks to setup and wondering why they're falling behind.
The good news? This isn't hard to fix. You just need to treat setup like code: version it, test it, and make it reproducible. Start with one project. Get it working. Then do the next one. Eventually, every project has a setup that works.
And then, finally, that front door is fixed. People can just walk in.
Want to see how this works with intelligent code completion? Try Augment Code for development assistance that integrates with your setup.

Molisha Shah
GTM and Customer Champion