Try Being Dogmatic With Programming Practices

I come across a lot of blog posts and hear a lot of conversations where someone says, “don’t be dogmatic about that”. I’m sure you’ve heard it: “Don’t be dogmatic about TDD” or “don’t be dogmatic about SOLID” or “don’t be dogmatic about KISS” or “don’t be dogmatic about patterns” or “don’t be dogmatic about Functional Programming” or “don’t be dogmatic about YAGNI”.

At first blush it seems like good advice. Of course! Don’t follow something dogmatically! Try it out, see if it works for you, and discard it if it doesn’t. But I’m starting to change my tune.

In almost all cases this seems to boil down to the author or conversationalist being unfamiliar with the practice they’re asking you not to be dogmatic about. Maybe they tried it, didn’t quite get it, and wrote it off as not worth it. From that point on they saw anyone who followed the practice as dogmatic. But the thing is: if you aren’t somewhat dogmatic about it, how do you learn something new? We have a tendency to think about things in terms of what we already know, and it takes a lot of work to undo that. You see this a lot when programmers try to pick up a new programming language; they try it out, don’t see what it can do that their favorite language can’t already do, and write it off. This is understandable. We think about problems in terms of the languages we are familiar with. It’s the same with human languages.

I say, give it a shot. If you find value in simplicity but aren’t quite sure how to achieve it, then really strive for it. Build your modules even simpler than the system strictly needs (I know, that sounds kind of oxymoronic). If the SOLID principles seem like a good lighthouse, then follow one or two of them to a T. Sure, you might end up doing some unnecessary work, but you’ll be learning. You’ll eventually get to the point where it makes sense why our industry has adopted a practice and why countless books have been written on the topic. And best of all, you’ll learn when it makes sense to apply the skill and when it doesn’t belong, instead of applying or not applying it across the board.

But here’s the thing. Don’t be so dogmatic that you insist others do it as well. This is your journey. Choose which skills and practices you’d like to hone and go after it. Maybe it’ll seem so valuable that you will want to share that with others. When that time comes, show them the value. Show them how to make software faster, cleaner, and with fewer bugs. It won’t take long before someone asks how you do it.


Don’t Fear Deleting Code

I’m not talking about code that was written by someone else or was written years ago and is no longer used. I’m talking about code that you wrote 5 minutes ago. But why delete code you wrote 5 minutes ago?

A lot of software developers are afraid to write code that they may not need later. And they’re terrified to write code they know they won’t need later. We’ll discuss the problem endlessly before we even try one of the solutions. I was definitely there not too long ago and I still struggle with it.

It seems we set off to write a use case with a general idea of what we want. Class names, method signatures, dependencies: we’ve got it nailed down. But then, suddenly, a change we weren’t expecting comes in. That class is now not working out so well. But we can’t just get rid of it! We put so much work into it!

Learning to delete code was one of the biggest obstacles I faced to learning TDD. I couldn’t believe that even though I knew (or thought I knew) where I wanted to go I would take these small steps to get there that would most likely end up going away in a little while. Why would you do that?!

I had an “a-ha” moment at a group pairing event led by @dwhelan. As we got started I said something like, “I don’t understand, how do I take the first step when I don’t even have a class or a method to do anything? Would I write something dumb like verify_class_exists()?” To which he replied, “Sure, why not?” Begrudgingly, I wrote that test. But after a couple more tests, that one seemed superfluous. Thinking I had gotten one over on Declan, I said, “But now that first test that checks for the class existing is useless! These other tests incidentally verify that the class exists and, besides, the compiler always assured us that the class existed!” “If you don’t need it anymore, why not delete it?” It was at that moment that the tide turned from TDD being a theoretical thing that seemed powerful into a tangible practice that I knew how to do. It wasn’t easy but at least I knew how to take the first steps.

Give it a shot sometime. If you don’t know how to write that next test, write something stupid like “such_and_such_method_should_exist” or “this_function_will_take_3_arguments”. It does wonders for getting you unstuck and really gets the creative juices flowing. Then delete it when it becomes redundant; a small price to pay for getting unstuck. The next time you want to drastically change a method signature, instead of changing it and then finding all the callers over the next few days/weeks, create a new method, delegate the old method to it, go around and update the callers, then delete the old method. Having the code compile and run the whole time is a great feeling. Yes, you’ll be doing some extra steps that you know you will delete in 5 minutes but the journey is very satisfying. Plus you end up getting there quicker and without any surprises when the code goes into production. It may be overkill sometimes but I find it’s a good tool to have around.
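The delegate-then-migrate move above can be sketched in a few lines. The class and method names here are invented for illustration; the shape is what matters:

```java
// A sketch of the delegate-then-migrate move described above; class and
// method names are invented for illustration.
class ReportService {

    // Step 1: create the method with the signature you actually want.
    String buildReport(String title, int year, boolean includeSummary) {
        String report = title + " (" + year + ")";
        return includeSummary ? report + " [with summary]" : report;
    }

    // Step 2: the old method now just delegates. Everything keeps compiling
    // while callers are updated one by one; once none remain, delete it.
    @Deprecated
    String buildReport(String title, int year) {
        return buildReport(title, year, false);
    }
}
```

Once the last caller has moved over, deleting the deprecated method is a one-line change the compiler verifies for you.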

The Myth of The Software Mess

When I started programming professionally I thought that creating a mess was unavoidable. You code a module, it is beautiful and does exactly what it’s supposed to do, and it does it well. But then someone else makes a change and it’s a little different from how you would have made the change. Then The Business starts requesting more and more features and it drives the design of the system in a different direction than you intended. It gets harder and harder to shim that code together. Too bad the business didn’t know everything they wanted right up front, then the code would still be beautiful. Oh well, maybe next time.

After a few years I got better at refactoring (as opposed to rewriting) and realized that the mess didn’t have to be lived with. The business would make last-minute decisions, the rest of the team and I would make a mess getting the code where it needed to be, and then we’d spend some time cleaning it up. It wouldn’t be perfect, as we’d never get much time before The Business wanted to do something else, but it would buy us some time before the code base became all but unmanageable.

I read a blog post a while ago that was passed around at work where the author draws a parallel between software messes and a mess in the kitchen. Just as it’s hard to start adding a feature in a messy code base, it’s hard to start a meal in a messy kitchen.

Just like the kitchen: it’s OK to cause a creative mess while cooking, but clean it up right after the meal. That way you make space for the next creative mess.

This sparked some good conversation and I generally agreed with it. A messy kitchen with stuff on the counters and dishes piled in the sink is enough to make me go out to eat instead. Trying to start a meal that way is always a pain and creates all kinds of unforeseen problems like burning or overcooking as you hurry to wash off a utensil that you needed. It’s much more pleasurable to clean the kitchen first, then start the meal.

But as time went on I noticed something about how I actually work in the kitchen: The first thing I do is put away any clean dishes, wash any dirty dishes, and clean off the counter. Now I’m ready to start getting out ingredients, measuring them, preheating ovens, etc. But as I go there are little breaks in the process where I have to wait for the oven to come up to temp or for the water to boil before I can add something to it and then it has to simmer for 10 minutes before adding the next ingredient. During these in-between-times I wash measuring cups, utensils and pots that I’m done with. I wipe up spills on the counter. By the time I’ve eaten there are only a few pots and pans which are easy to clean up and there is nothing but a clean sink and counter and dishes in the rack drying. There’s no big mess at the end that I have to clean up before leaving the house.

With practice I’ve been able to get closer to this with software development. In the early days of getting good at cooking you’re very frazzled as you learn a new domain. There’s no time to clean anything as you’re trying to juggle the details of what you have to do next. It’s the same with software. There are so many languages, stacks, libraries, and frameworks to learn that you feel you have no time to clean up. But as you get more comfortable writing software there are fewer stretches of time where you’re doing nothing but furiously writing code. There are pockets of time where you can reflect on the problem and, while doing that, do some quick cleanup.

And that’s the hard part. It takes practice. One of my early problems was that, like that blog post, I wrote off messes as inevitable and so never strove to do anything about them. It doesn’t have to be that way. I’m at a point now where I can write code just as fast, if not faster, while maintaining the integrity of the code base. There’s no need to clean up afterwards and there’s no need to clean up before the next feature. If this seems doable to you, I recommend striving for it. It’ll take time and effort but the journey is worth it. Look for those little time slots where you need to take a break from the problem domain and refactor that design that became obsolete or extract a method where you’ve got some runaway code.

Addendum

Some may read this and translate “mess” to “technical debt”. I avoided the term deliberately. Back in those early days I mentioned, I never thought I was creating a mess; I thought I was creating technical debt. Creating technical debt sounds a lot better, as though you’re making an investment. I went on thinking this way until a little while ago, when I noticed people using “debt” to refer to what looked like laziness. They either knew what to do and didn’t do it, or didn’t know what to do but wouldn’t take suggestions from other developers. I had been guilty of both, but it was easy to see in action from the outside. I searched around and sure enough people like Robert Martin and Martin Fowler had written about the difference between a mess and technical debt.

Lazy Software Development

I was part of a code review recently where someone said, “I did this as a lazy alternative to X”. It didn’t look like much at first but as we dug in there was a dangerous mixing of concerns which would cause issues because of the coupling. Changes in one part of the system would have an effect on unrelated parts of the system.

A while ago someone was looking over some of my code and said, “Oh wow, you created objects and interfaces for everything. I feel so lazy.” It was weird to hear because I feel lazy.

Here’s the thing: I’m one of the laziest people you’ll ever meet in terms of software development. I create objects for everything because I’m lazy. I relentlessly separate concerns because I’m lazy. I’m obsessive about maintaining encapsulation because I’m lazy. I keep different levels of abstraction separate because I’m lazy.

I’m sure you’ve heard the factoid that the average person can hold 5-7 things in short-term memory at any one time. I’m definitely on the lower end of that spectrum. The most exceptional individual can remember maybe 10 or 15 pieces of information at once. That’s not a large spread. If you think you can remember all of the details about a system, you’re only fooling yourself. You have to spend quite a bit of time stumbling around and re-remembering what all of those bits of tribal knowledge are. Stop wasting that limited resource. Don’t bother remembering that every change to Foo requires a change to Bar and Baz. Remove the coupling so you don’t have to remember! Don’t waste your effort remembering that you need to call `get("pl-user-sys-name")` to get the user’s name. Codify it and call `getUserName()` instead! Don’t make people remember that `uacc` is the user’s account; rename it to `userAccount`.
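Here’s a minimal sketch of what “codify it” might look like. The `pl-user-sys-name` key comes from the example above; everything else is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of "codify it": the magic lookup key now lives in
// exactly one place, behind an intention-revealing method.
class UserAccount {

    private final Map<String, Object> properties = new HashMap<>();

    UserAccount(String name) {
        properties.put("pl-user-sys-name", name);
    }

    // Callers no longer need to remember the key, only the intent.
    String getUserName() {
        return (String) properties.get("pl-user-sys-name");
    }
}
```

Nobody reading `account.getUserName()` has to carry that string around in their head anymore.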

Having to remember that a conditional, when modified, needs to be modified in 5 different places is not lazy programming. That’s the most active programming possible. You will need to remember that every time you work on the system. You will have to tell new developers about it. And you will be the one everyone goes to when weird bugs pop up with it at 3 in the morning. Embrace lazy software development. Encapsulate your concerns so that you can stop worrying about them. Codify all that knowledge so that others can simply see the purpose of the system. They shouldn’t have to ask you. The intent of the system should be self evident.

How I Learned to Stop Worrying and Give up on Coding Standards

I’d like to take you on a journey with me.

Many years ago I was fiercely into coding standards. All indents must be tabs (I can hear the cringing). All curly braces open on the same line (blasphemy!). Etc, etc. But over time I started caring less and less, and I’m at a point now where I hardly notice. This would all be fine and good if I lived in a vacuum and didn’t work with other people. But the way things shake out, I do, and this has led to some strange interactions.

I’ve now worked in enough languages and on enough teams that I can’t keep one set of conventions separate from another. I got in the habit of spending time up front configuring my IDE to do this for me. This works some of the time. When I create a new class, for instance, it puts the braces on the correct lines and inserts the correct amount of white space between blah blah blah and so forth. When I’m typing I have certain habits that are ingrained, but I can usually fix that with ctrl+shift+F or whatever shortcut my IDE uses for formatting. But even that can’t catch everything, as I often forget to do it. Also, it can’t handle every rule that someone, somewhere came up with.

I used to work at a place with strict coding standards. And when I say strict, well, man, it was pretty serious. Code reviews were used to enforce these standards. There was also a weird rule that someone else couldn’t fix broken style in your code. So if you missed a line break you would fail the code review and have to go over it and submit again. This was kind of an extreme case so I don’t want to use it to build my story.

Another place I worked did not enforce standards or make a big deal about them, but they still existed. I’d do my best to set up my IDE to take care of it, but transgressions would slip through. The code would always get merged (barring any kind of logic error or design flaw), but later on a separate ticket would be opened where someone “fixed” the style faux pas and generally added me and a few others as reviewers. Let’s think about that for a second. At most places nowadays the general workflow is to open a “ticket” or “task” which the work is tracked against. Next the repository is forked or branched, checked out into someone’s work environment, and the code style is cleaned up, maybe manually or maybe by the IDE. Then a pull request or patch is generated and people are “tagged” to review it. Let’s say three people. These three people have to look over the code and give their approval. Say it takes each an average of 10 minutes to look over the changes, become familiar with the problem domain, make sure bugs weren’t introduced, check out the code, and run any tests. To do this they have to stop what they are currently doing or wait until they get to a stopping point. Either way, they were probably notified via some automated system that there was something they had to do. They will need to spend a certain amount of time switching gears into this task, and once they are done they will need to switch gears back to what they were working on before. This task switching comes at a HUGE cost. Programmers, by their very nature, are not good at task switching.

Conservatively, 1 hour is spent per reviewer plus another hour for the programmer who did the task. 4 hours total. Depending on where you are prices and participation may vary but once you factor in salaries, bonuses, stock options and other overhead (HR, rent, electricity, health insurance, perks) we get an average of about $200/hr times 4 hours = $800. Nearly a grand was just dropped to tidy up the style in a handful of classes.

So why the compulsion toward coding standards? It’s obviously important to many people. It was to me at one point. So what changed? Why did I stop caring? Let’s look at some code.

Say you’ve got a gigantic method which mixes concerns and levels of abstraction. Something like this:

public class CouponEmailer {
    
    public void sendCouponsTOCustomers() {
        SQLUtils.openConnection(SQLUtils.CUSTOMER_DB);
        
        
     List<Map<String, Object>> Query = SQLUtils.query("SELECT * FROM CUSTOMER WHERE COUPON_OPT_IN=1");
        
        SQLUtils.closeConnection(SQLUtils.CUSTOMER_DB);
        
               SQLUtils.openConnection(SQLUtils.COUPON_DB);
        
                             for (Map<String, Object> customer_map : Query) {
            Long numberOfCoupons =
                (Long)customer_map.get("number_of_coupons");
            
            for (int i = 0; i < numberOfCoupons; i++)
            {
    List<Map<String, Object>> query1 = SQLUtils.query("SELECT * FROM COUPON ORDER BY RANDOM LIMIT 1");

    MailUtil.sendEmail((String)customer_map.get("email"), (String)query1.get(0).get("content"));
}
}

SQLUtils.closeConnection(SQLUtils.COUPON_DB);
}
    
}

Gross, huh? Let’s hit ctrl+shift+F to get that indentation all rowing together, and rename some of those methods, as they’re using a mixture of camel case, Pascal case, and underscores:

public class CouponEmailer {

    public void sendCouponsToCustomers() {
        
        SQLUtils.openConnection(SQLUtils.CUSTOMER_DB);

        List<Map<String, Object>> query = SQLUtils.query("SELECT * FROM CUSTOMER WHERE COUPON_OPT_IN=1");

        SQLUtils.closeConnection(SQLUtils.CUSTOMER_DB);

        SQLUtils.openConnection(SQLUtils.COUPON_DB);

        for (Map<String, Object> customerMap : query) {
            Long numberOfCoupons = (Long) customerMap.get("number_of_coupons");

            for (int i = 0; i < numberOfCoupons; i++) {
                List<Map<String, Object>> query1 = SQLUtils.query("SELECT * FROM COUPON ORDER BY RANDOM LIMIT 1");

                MailUtil.sendEmail((String) customerMap.get("email"), (String) query1.get(0).get("content"));
            }
        }

        SQLUtils.closeConnection(SQLUtils.COUPON_DB);
    }

}

It looks pleasing to the eye now, but it’s in horrible shape: huge methods, static methods, global state, possible race conditions, reaching out and doing random db stuff (mixing levels of abstraction), names that don’t express intent. Let’s clean that up.

public class CouponEmailer {
    
    private final CustomerRepository _customerRepository;
    private final CustomerCouponRepository Customer_Coupon_Repository;
    private final CouponNotifier couponNotifier;

    public CouponEmailer(CustomerRepository customerRepository, CustomerCouponRepository couponRepository, CouponNotifier couponNotifier) {
    this._customerRepository = customerRepository;
this.Customer_Coupon_Repository = couponRepository;
this.couponNotifier = couponNotifier;
    }

    public void sendCouponsTOCustomers() {
        
        Collection customers_with_coupons = _customerRepository.findCustomersWithCoupons();
        notifyCustomers_with_Coupons(customers_with_coupons);
        
        
    }

    private void notifyCustomers_with_Coupons(Collection customers_with_coupons) {
    for (Customer Cust : customers_with_coupons) {
            Collection coupons = Customer_Coupon_Repository.findByCustomer(Cust);
                this.couponNotifier.sendCoupons(coupons, Cust);
        }
}

}

Awesome. We hid all the implementation details of persistence and mailing behind clean interfaces and got some good names going so that the code reads well. But, as you can see, it looks as though multiple people have edited this with different naming styles and tab stops. Let’s hit ctrl+shift+F and shore up the naming conventions again:

public class CouponEmailer {

    private final CustomerRepository customerRepository;
    private final CustomerCouponRepository customerCouponRepository;
    private final CouponNotifier couponNotifier;

    public CouponEmailer(CustomerRepository customerRepository, CustomerCouponRepository couponRepository, CouponNotifier couponNotifier) {
        this.customerRepository = customerRepository;
        this.customerCouponRepository = couponRepository;
        this.couponNotifier = couponNotifier;
    }

    public void sendCouponsToCustomers() {
        Collection customersWithCoupons = customerRepository.findCustomersWithCoupons();
        notifyCustomersWithCoupons(customersWithCoupons);
    }

    private void notifyCustomersWithCoupons(Collection customersWithCoupons) {
        for (Customer customer : customersWithCoupons) {
            Collection coupons = customerCouponRepository.findByCustomer(customer);
            this.couponNotifier.sendCoupons(coupons, customer);
        }
    }
}

And, well, uh, hmm. Not much of a difference. Yes, it is nicer to look at, but the formatting hasn’t bought nearly as much as the refactoring did. If I had to choose between the well-formatted but poorly factored CouponEmailer and the poorly formatted but clean CouponEmailer, I’ll take the clean one every time. This is kind of an interesting thing: well-formatted code is not the same as clean code (Jon Skulski, 2014, personal correspondence).

I didn’t detect this change all at once. Gradually I noticed people getting testy with me for forgetting to format. At first I thought it was a simple difference of opinion, but over time it seemed like something else.

Even at the beginning of my career my goal was to get better. However, I didn’t know what “better” meant. I didn’t even know that I didn’t know that. The easiest thing to grasp was to create code with “clean style”. Code that looked good was easier to grasp and allowed you to follow the indents to get a feel for the nesting level. But at some point I was introduced to works like The Pragmatic Programmer and Clean Code. A new world opened up. These books never really mentioned coding standards except in passing. The content of the books was focused on creating readable, reasonable code, which resonated heavily with me. I strove toward the example they presented. As I got better at creating small, well-focused classes and methods I subconsciously stopped paying attention to style. When there is a single level of abstraction in a method, there is no reason to concern yourself with what character to use for indentation and how big the indentation should be, because there is only one level of indentation.

I’m not against coding standards per se but I’ve come to see them as premature optimization. If your code base is clean and has good separation of concerns then, sure, worry about white space. But if you have a bunch of monolithic methods and a choice to worry about brace placement or untangling logic, worry about untangling the logic first. The coding standards will follow automatically. Or they won’t. Chances are once you shift your focus to clean code you will stop caring about camel vs Pascal case.

Use Tests To Drive Your Design, Not Degrade It

We’ve all heard that you should use tests to drive your design but what does that mean, exactly? I’ve run across some interesting design decisions that were made under the flag of testability though I think they missed the mark insofar as driving the design. These were more a degradation of design.

Making Methods Public

Methods are promoted from private or protected to public so they can be tested directly. This usually arises from private methods that are extracted but grow over time to contain increasingly complex logic. After a while it is no longer convenient to test them through the public API of the class. It would be nice to test them directly.

The problem with this is that it muddies the public API of your class. If a method belongs to the internals of this class it should remain private. Making methods public that shouldn’t be public makes your API confusing to your consumers.

This hints that the class is trying to do too many things; a Single Responsibility violation. There is another class or two trying to get out. If there’s a hairy bit of logic you would like to test separately but is hidden behind a private method, pull it out into a new class where it can be a public concern of that class.
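A sketch of what that extraction might look like, with invented names: the once-private logic becomes the public concern of a small new class, and the original class keeps its narrow API.

```java
// Invented names: the hairy logic that used to hide behind a private method
// is promoted to a small class where being public is legitimate.
class DiscountCalculator {
    double discountFor(int couponCount) {
        if (couponCount >= 10) return 0.15;
        if (couponCount >= 5) return 0.10;
        return 0.0;
    }
}

// The original class collaborates with the new one instead of
// exposing its internals for the tests' benefit.
class InvoiceBuilder {
    private final DiscountCalculator discounts = new DiscountCalculator();

    double totalFor(double subtotal, int couponCount) {
        return subtotal * (1.0 - discounts.discountFor(couponCount));
    }
}
```

Now the hairy rule is tested directly through DiscountCalculator, and InvoiceBuilder’s public API stays honest.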

Making Members Protected

Members are promoted from private to protected so they can be overridden. This one tripped me up. I was tagged in a code review where some private members were changed to protected. There were no descendants that needed access to the members nor was there intention of extending the class. So why was this change made? I asked and the reason was so that you could “test” this class by extending it and then overriding these members.

This increases coupling and breaks encapsulation in the name of testing. But the big problem is you aren’t truly testing the class at this point. By extending the class you are testing that subclass and only incidentally testing the parent class. It might be close enough for government work but not for software development. Another possibility is promoting access in languages that allow package-level access, thus granting access to tests that are in the same package or namespace. I could buy that there would be circumstances where this is necessary but would save it for a last resort.

This indicates a possible Dependency Inversion violation. Instead of making these members protected, pass their values in at object construction time. You now have the added benefit of a more flexible design.
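For example (the names here are invented), a field that was made protected so a test subclass could override it can become a private, constructor-injected collaborator instead:

```java
// Invented example: a field that was made protected so tests could override
// it becomes a private, constructor-injected collaborator instead.
class TokenIssuer {

    private final java.time.Clock clock; // private again; injected, not overridden

    TokenIssuer(java.time.Clock clock) {
        this.clock = clock;
    }

    long issuedAtEpochSecond() {
        return clock.instant().getEpochSecond();
    }
}
```

Production code passes `Clock.systemUTC()`; tests pass `Clock.fixed(...)`. No subclassing, and you’re testing the actual class.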

Singleton Modifier

I ran into some classes that used a singleton to create a database connection. This created an obvious testability problem so I asked around about it, figuring it was a legacy approach that had since been replaced by something that could be injected. Turns out this was the general way to retrieve a database connection and the testability problem had been addressed in an interesting way: This singleton came with a `setInstance` method allowing one to mock or otherwise subclass and set that instance.

Singletons are already problematic because they are (socially acceptable) global state. The only thing worse than using global state is reaching out from deep within an implementor and modifying it. This is sure to come back and haunt you. Just having that `setInstance` hanging off there will be enough temptation for someone to abuse it outside of the tests.

This is a Dependency Inversion and encapsulation violation. Get rid of the singleton. Pass your collaborators into the class and relieve it from having to construct its own dependencies.
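A sketch of that fix, with invented names standing in for the real classes: the connection is handed in up front, so tests can supply a fake with no `setInstance` tricks.

```java
// Invented names: the class asks for its connection up front instead of
// reaching out to a singleton, so tests can hand in a fake directly.
interface ConnectionProvider {
    String query(String sql);
}

class CustomerLoader {

    private final ConnectionProvider connection;

    CustomerLoader(ConnectionProvider connection) {
        this.connection = connection; // no getInstance()/setInstance() anywhere
    }

    String loadName(int id) {
        return connection.query("SELECT name FROM customer WHERE id=" + id);
    }
}
```

Because ConnectionProvider has a single method, a test can pass a lambda as the fake.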

In Conclusion

In all of these cases design was modified to accommodate tests. Which is good, right? We want tests to drive the design, correct? Well, sort of. Pain encountered while testing means that you have a deficit in your design. Instead of bending over backwards to accommodate your testing, use this “testing pain” as a sign to take a step back and look at the overall design. If you’re having trouble coming up with anything, pull in a colleague or two. Sometimes all you need is a fresh perspective. In general, a good place to start is to look for violations of the basics: the SOLID principles, the Law of Demeter, encapsulation, and separation of concerns. That will cover most of your bases.

There Are Only A Few Ways To Write Good Code

I was involved in a code review at work with a couple colleagues.  We were looking at a particularly tangly piece of code which had been tangly for quite some time and someone recently added some more tangle on top of it. We mused over how this kind of thing happened and what we could do to clean it up. It was obvious that the most recent tangle was added because it was impossible to understand what the code was trying to do. Rather than risk a large refactor, the author took the safer route of shoehorning the bare minimum of logic into the class to get it to “work”.

How do we “fix” this class? How do we get it where it needs to be? It was nearly impossible to fathom these questions with the problem domain obscured by layer-upon-layer of tightly interwoven control structures. I suggested that we pick a couple of easy refactors to get started. Find the pieces that have no dependencies and pull them out. We discussed it for a while and the point was raised, “Could it be dangerous to make a few, unguided, small refactors?” I’ve wondered the same thing myself. After all, you may make a small refactor in one direction but when the next person comes in to make a refactor they may make it in another direction entirely. Now your initial refactor has made this second refactor difficult because the two of you have two completely different ideas of where this code should be headed. You’ve unwittingly added to the mess.

This was a really good point which I had not given much thought to before. But I strongly believed, somewhere deep inside, that good, clean code was universal and emergent. But was this an errant belief?

There’s a general sentiment amongst the software development community that code quality is relative. The way I write my code is just as good as the way you write your code. I’m OK and you’re OK. Everyone is OK. OK? Well, not really. The legacy code that The Ancients wrote before you started at The Company and even the code you wrote a little while ago was pretty weird, right? You look at it now and can’t comprehend it. So you chalk it up to the fact that maybe you just need to become more familiar with the code base. Or maybe you know it’s bad but aren’t sure why. It’s just generally bad. But have you ever seen code written by a master? You probably have but didn’t know it. The code was simple and flowed and was easy to understand. Now that you saw the finished product, sure, you would have written it that way too, right?

That’s no accident. They didn’t happen to write the code the way that you would have. And it didn’t arrive at that state on the first go. It probably started out with the underlying algorithms expressed as nested else-ifs and strange loops that closely mirrored the language of the business requirements. The difference is that once it “worked” they didn’t close the ticket and call it a day. They took a little bit of extra time to pull the pieces apart and give names to the bigger concepts. Mixed levels of abstraction were separated. Specific behaviors were identified and elevated to the level of classes, methods, and functions.

The one thing they probably didn’t do was sit down and draft a completed design before putting code to file. So why is this? How did this work out so well? What are the odds that they could have written it well, yet in a way that was hard for most to understand?

The reason is simple: Good, clean code implies order and an ordered system can only exist in a limited number of states. There are many states that a system with high entropy can exist in. But a system with low entropy can only exist in a very constrained number of forms. Sloppy code has high entropy. Sloppy code can be written many, many different ways.

Here’s the thing, you’re never flying blind. If you can identify a SOLID principle being violated or someone breaking the Law of Demeter or poor encapsulation or mixed abstractions then let that be your guide post. Start by fixing these violations. They will also help you become familiar with the domain that you’re untangling.

So don’t worry about deciding how you want that gnarly piece of code that you unearthed to eventually look. Find the first thing that is easy and has no dependencies and pull it off and put some tests around it. If you stop there, at least you made that piece testable. But chances are that pulling this bit of debris away will afford you a couple more that can easily be pulled away. Extract a method. Separate a mixed abstraction. Remove a singleton and inject it. Bit by bit you’ll see this code take shape and become something you and your team can admire.
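The “remove a singleton and inject it” step can be sketched like this. Everything here is an invented, minimal example, not a prescription:

```python
# Hypothetical sketch of removing a singleton-style dependency.
import datetime

class Clock:
    """The ambient dependency everyone used to reach for directly."""
    def now(self):
        return datetime.datetime.now()

# Before: the dependency is hidden inside the function, so a test
# can't control what time it sees.
def greeting_before():
    hour = Clock().now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# After: the clock is injected, so a test can supply a fake.
def greeting_after(clock):
    hour = clock.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

class FakeClock:
    """A test double with a fixed time."""
    def __init__(self, hour):
        self._t = datetime.datetime(2020, 1, 1, hour)

    def now(self):
        return self._t
```

That one small inversion is what makes “put some tests around it” possible: the test passes in a `FakeClock` and asserts on the result, no global state required.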

Software Architecture: Big Ball of Mud

Finally, engineers will differ in their levels of skill and commitment to architecture. Sadly, architecture has been undervalued for so long that many engineers regard life with a BIG BALL OF MUD as normal. Indeed some engineers are particularly skilled at learning to navigate these quagmires, and guiding others through them. Over time, this symbiosis between architecture and skills can change the character of the organization itself, as swamp guides become more valuable than architects. As per CONWAY’S LAW [Coplien 1995], architects depart in futility, while engineers who have mastered the muddy details of the system they have built in their images prevail. [Foote & Yoder 1998a] went so far as to observe that inscrutable code might, in fact, have a survival advantage over good code, by virtue of being difficult to comprehend and change. This advantage can extend to those programmers who can find their ways around such code. In a land devoid of landmarks, such guides may become indispensable.

Programming Doesn’t Suck

Peter Welch’s post, Programming Sucks, has been making the rounds and I feel compelled to comment on it.

This article embodies the distinction between amateur hour and professionalism. Between those who can create good abstractions and those who cannot. Between someone who understands how to separate concerns and someone who does not.

I fell in love with the first section, where Peter draws an analogy: you walk onto a new team that’s building a suspension bridge, but no one knows how to build a suspension bridge, there’s one guy who works only with wood, and you don’t even have experience building bridges. It does a fantastic job of painting the characters that we all are. But as I read further I started to get a bad taste in my mouth. Something was not quite right.

The article goes on and on without coming to a conclusion or addressing a problem or solution. It is a self-described rant, but that is only a thin veil. We start out with the premise “programming is just as difficult as physical labor” but then quickly move on to the main theme of “beautiful code cannot survive in the wild”; as soon as the business gets its mitts on your code it will fall apart and become part of the big-ball-o-mud that everything else is. This is fear of the code that you and your team just wrote. This has a name: Programming by Coincidence. It is the inability to reason about your code and create strong concepts within your application. Business requirements will get added on and changed at the last minute. That’s the nature of the game. The ability to adapt the code accordingly is what we get paid to do. If we’re designing our applications properly, we can defer decisions as long as possible and keep more options open down the road. To throw our hands up in the air and admit defeat is simply throwing a temper tantrum because, gosh darn it, it’s hard.

All that, in and of itself, would not be bad. What really got me was how widely this was circulated on social networks with comments like “so true!” by people who get paid to write software and call themselves “engineers”. How did we reach this level of despair? Some days can be brutal, I know. You have to deal with Product Owners getting antsy about deadlines. You have to deal with that one dude who doesn’t care because, like, man, it works or whatever. But this is our responsibility. It cannot be blamed on management. The company you work for _implicitly_ expects you to do the right thing. Oh sure, they’ll say they’re fine with cutting corners to get that feature out the door this iteration, but they don’t understand what they’re asking. They aren’t going to understand why, after a few months of doing this, the next similar feature they ask for is going to take 2 weeks to build instead of half a day. It’s the same way you wouldn’t understand why the next brake job on your car costs 10 times as much and takes much longer because this time you said, “well, instead of waiting for the parts, couldn’t you just, I don’t know, weld something into place so we can ship it?” Imagine your mechanic agreeing to that. Yeah, he may have made you happy today and for the next several thousand miles. But when you go in next time and hear the estimate you’re going to shout, “why would you have done that?!” His reply of “because you said so” is not going to cut it.

And, yes, you will need to take on some debt from time to time to get the release out the door. But pay it off in the next iteration or two. Don’t wait for your manager or Product Owner to give you the go ahead. They will not be able to successfully balance the cost of this debt vs the cost of a yet-to-be-implemented feature that has real dollar value. At least not until it’s too late. This is your job, not theirs.

Here’s an antidote for you: Check out this awesome keynote by Uncle Bob on architecture and tell me it doesn’t inspire you to take your software to the next level. If that doesn’t do it, here’s a great talk by Jim Weirich about how to separate concerns and decouple your applications. We don’t all have the good fortune to work with someone who inspires us, as seems to be the case with Welch. You will not regret the two hours you spend watching these videos instead of Game of Thrones (or whatever you kids are into these days). Dozens of other talks could be listed, but start there and, goddammit, do something about it. Don’t give in to entropy and incompetence. Push yourself, make yourself better at what you do. Teach others who are struggling with it. You can do it, friend.