I feel this way about questions that disparage good software practices.
There was an email chain at work about a possible bug that got sidetracked into a discussion about how all work should be done in standardized environments; there should be some kind of local VM setup, shared development environment or similar. Something about this didn’t sit well with me and I’d like to explore why. At the very least it is in direct conflict with my firm belief that your new hire should be able to check out a project on a clean machine and have it Just Work. Either my gut feeling about standardized environments is on track or that belief is misguided.
The reasons for this stance on environments make sense: if your development environment is identical to your production environment, there is less room for surprises like subtle gotchas in interpreter, database, and other service versions. You don’t have to worry about URLs to services differing between environments. If you need some kind of cache, you can get it set up once and rolled out for everyone to use. If there’s a file you need to access, you always know where to find it, and…
There is a danger that lurks in the shadows. The closer you work with one specific environment, the more your application becomes tangled up with it. You take for granted that a file exists at a specific place, so a class or function accesses it directly. Or that a hostname will always be the same, or that credentials to a database will never change, so you hardcode them. You become reliant on an esoteric extension to your environment and then forget about it. These assumptions creep into your codebase and ever so slowly make it brittle.
Now that your database connections or web service calls are baked in, you lose the flexibility to swap in a stub for testing or create a new implementation when you need to store the data differently. When the reads or writes to that file create race conditions or become unscalable, it’s going to be tough to remedy. And what happens when that esoteric extension stops being maintained but you want to upgrade language versions? Over 1k classes depend directly on it! Should you just stay a version behind? 2 versions behind? 3? 10?
I hear a bit of, “That’s impossible! There are databases to connect to! Web services to retrieve data from! This application needs those things or it cannot run!” I’ll posit that this means your application is too coupled to the data or to the way the data is stored and retrieved. Have an in-memory database for development and add some preliminary data as part of the build process. Use a hard-coded stub in place of those web service calls. That third-party thing that requires a license? Stub that out too. You are building to interfaces, right?
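A minimal sketch of the idea in Python, assuming hypothetical names (`UserStore`, `welcome_line`, and friends are illustrations, not anything from a real codebase): the application code depends only on an interface, so a seeded in-memory SQLite database or a hard-coded stub can stand in for the real service during development.

```python
import sqlite3
from typing import List, Optional, Protocol, Tuple


class UserStore(Protocol):
    """The interface the application builds against."""
    def get_email(self, user_id: int) -> Optional[str]: ...


class SqliteUserStore:
    """A database-backed store; in development, an in-memory one
    seeded with preliminary data as part of the build."""

    def __init__(self, dsn: str = ":memory:") -> None:
        self.conn = sqlite3.connect(dsn)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
        )

    def seed(self, rows: List[Tuple[int, str]]) -> None:
        self.conn.executemany("INSERT INTO users VALUES (?, ?)", rows)

    def get_email(self, user_id: int) -> Optional[str]:
        row = self.conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None


class StubUserStore:
    """Hard-coded stand-in for a remote web service or a licensed
    third-party dependency -- no network, no license, no surprises."""

    def get_email(self, user_id: int) -> Optional[str]:
        return {1: "dev@example.com"}.get(user_id)


def welcome_line(store: UserStore, user_id: int) -> str:
    # Application code talks to the interface, never a concrete backend.
    email = store.get_email(user_id)
    return f"Welcome, {email}" if email else "Welcome, guest"
```

Swapping the production implementation in later is then a wiring change, not a rewrite.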
Automate all that configuration with a good Configuration Management tool. Don’t let the developers stumble through where the stubs live and how to configure them. Don’t make DevOps worry about what extensions and services need to be installed in production and how they’re setup.
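One way to picture that automation, sketched in Python with hypothetical settings (the `APP_ENV` variable, the `billing` key, and the per-environment defaults are assumptions for illustration): a single place that resolves which implementations and endpoints each environment gets, so neither developers nor DevOps hand-assemble a setup.

```python
import os

# Assumed per-environment defaults: dev gets the in-memory database and
# stubbed services; prod reads its real endpoints from the environment.
DEFAULTS = {
    "dev":  {"db_url": "sqlite:///:memory:", "billing": "stub"},
    "prod": {"db_url": os.environ.get("DATABASE_URL", ""), "billing": "real"},
}


def load_config(env=None):
    """Resolve settings for the current environment.

    Falls back to the APP_ENV variable, then to "dev", and fails loudly
    on an environment nobody has defined.
    """
    env = env or os.environ.get("APP_ENV", "dev")
    if env not in DEFAULTS:
        raise ValueError(f"unknown environment: {env}")
    return dict(DEFAULTS[env], env=env)
```

A real setup would live in a configuration management tool rather than a dict, but the shape is the same: the choice of stub versus real service is declared once, not rediscovered by every new hire.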
At the end of the day, having standardized environments makes development easier for your engineers, but it does not make it simpler. Don’t get me wrong, your pipeline should still take your releases through a production-like environment at some point, but don’t constrain your development cycle to one. You want that new gal who just started to break something because she’s proficient with a new language feature. Now you know there’s reason to consider upgrading the language. That guy who just joined your team and ran into an issue because he didn’t know how to configure the application? Well, now you know you need to stub that out and add it to your configuration manager.
These little gotchas help suss out problems long before they become paralyzing. You want to catch them before you wake up one day and you’re working with a version of your language that is 10 years old, held together with strange hacks and patches, on an operating system that is 15 years old. And you can’t upgrade because, well, you’re just not sure what would happen.
All these little costs can be hard to measure. How much developer time is being wasted on ceremony when there’s a new library that automates this common task? How much time goes into writing unit tests to cover cases that a newer version of the language prevents from happening? How many opportunities to work remotely are lost because your application requires 100 remote connections per request and the airline Wi-Fi isn’t fast enough? How many good interview candidates walked in the door and then right back out because they would have had to work in a stack that was 10 years old?