Software Alchemy

The Art and Science of Software Development

Basic Concepts of Software Design and Architecture

This is the first in a series of blog entries in which I will elaborate on progressively more advanced subjects, beginning with fundamental software patterns, practices, principles and conventions; culminating in the establishment of an architectural template demonstrating how to build enterprise business applications which can be deployed to the Microsoft Azure cloud. Along with this and future blog entries, I'll be using a demo application for a fictional organization to demonstrate the concepts I discuss. If you'd like to check out the source code in advance, you can find it here.

Laying the Groundwork

If you want to pursue software development as a career or even just as a hobby, you must be able to reason and think abstractly, but what exactly does this mean?

What is abstract thinking? I define it as being able to create generalized mental models of specific processes, objects, or occurrences from the real world or a specific domain, which you can then use (i.e. "apply") to solve problems much more easily in the future. Essentially, abstract thinking is dealing with concepts and ideas which do not exist in nature. Mathematics is the most obvious example of abstract thinking in our lives, which we take for granted. Sometimes as software developers, we also take for granted how much we are dealing with abstractions. This is especially true if we are self-taught, which is pretty much all of us, even if we have one or more Computer Science degrees.

Abstract thinking is something that doesn't seem to exist in the animal kingdom outside of higher order primates, and in human beings it developed over a long period of time. According to Morris Kline in Mathematics for the Nonmathematician: "The Egyptians and Babylonians did reach the stage of working with pure numbers dissociated from physical objects. But like young children of our civilization, they hardly recognized that they were dealing with abstract entities. By contrast, the Greeks not only recognized numbers as ideas but emphasized that this is the way we must regard them [my emphasis]." This was the inception of modern mathematics as we know it, and the basis for modern information technology.

What is reasoning? It is the process of understanding and forming new mental models from outside information. There are several types of reasoning but the three most important ones for software development are: reasoning by induction, reasoning by deduction, and reasoning by analogy. Kline describes them as:

  • Inductive reasoning: "The essence of induction is that one observes repeated occurrences of the same phenomenon and concludes that the phenomenon will always occur."
  • Deductive reasoning: "In deductive reasoning we start with certain statements, called premises, and assert a conclusion which is a necessary or inescapable consequence of the premises."
  • Reasoning by analogy: "to find a similar situation or circumstance and to argue that what was true for the similar case should be true of the one in question."

Software Is an Abstraction, Built Using Reasoning

Software is so ubiquitous in the 21st century that even laypeople talk about it all the time ("I just downloaded this great app…"), but nobody really stops to think about just what it is. We know that software is composed of machine instructions that are processed by the CPUs of physical machines and/or virtual machines, but it is much more. Software is an intangible abstraction which has real-world applications, such as providing information to decision-makers (i.e. "business intelligence"), providing answers to specific problems, operating industrial machinery, and so on.

I'm assuming that you are programming using a high-level language (C#) which is object-oriented and operating several levels removed from the physical processor that ultimately executes your code. So now we're in an entirely different world, in which the basic building blocks for our systems are the language keywords, programming constructs and .NET Core framework classes which we use to build software. In this sense, we might use "level of abstraction" to discuss the degree to which something in our code is separated from these basic building blocks.

As a basic guide, we can enumerate these levels from most abstract to least abstract, like this:

  1. System
  2. Subsystem
  3. Layers
  4. Components
  5. Classes
  6. Data and methods

In general, we draw a hard line between the Layers and Components levels of abstraction: levels 1 through 3 are at the architectural level, while levels 4 through 6 are at the code level. This will be important going forward.

Abstractions and Concretes

Once again, our focus is primarily on the object-oriented programming (OOP) paradigm here, in which I'll be using C#, though I may mention F# here and there just to show you what's possible. Within the OOP world we often talk comparatively about types or components being concrete vs. abstract. For example, a class which writes a line of text to a file somewhere on your file system is regarded as a concrete implementation, whereas a .NET interface which defines something that writes a line of text somewhere—the file system, a database table, cloud storage—is an abstraction. Certain technologists, such as Mark Seemann, will argue that interfaces are not abstractions in themselves, that they aren't code contracts, and so on. Furthermore, the waters have become muddier since C# 8 now allows interfaces to have default implementations. Since this is an introductory blog series and I don't want to confuse you, I'll use these terms in the most simplistic sense, and generally ignore things like default interface implementations.
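To make the distinction concrete, here's a minimal C# sketch. The ILineWriter name and both implementations are my own illustrations, not .NET framework types:

```csharp
using System;
using System.IO;

// An abstraction: it defines *what* a line writer does, saying nothing about *how*.
public interface ILineWriter
{
    void WriteLine(string text);
}

// A concrete implementation, bound to the file system.
public class FileLineWriter : ILineWriter
{
    private readonly string _path;

    public FileLineWriter(string path) => _path = path;

    public void WriteLine(string text) =>
        File.AppendAllText(_path, text + Environment.NewLine);
}

// Another concrete implementation; code written against ILineWriter
// doesn't care which of these it is actually given.
public class ConsoleLineWriter : ILineWriter
{
    public void WriteLine(string text) => Console.WriteLine(text);
}
```

Any component that asks for an ILineWriter can be handed either class, which is exactly the flexibility we'll exploit later when we discuss Dependency Inversion.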

Be aware that the term "interface" may not necessarily refer to explicit interfaces under .NET, but may also refer to things like the programmable surface of web applications, cloud services, or even hardware devices. In general, we refer to these as Application Programming Interfaces, or APIs. These typically operate at a higher level of complexity than abstractions in a programming language. See the discussion below on toolkits, frameworks, and APIs.

What Is Software Architecture?

What is software architecture? I define it as two separate concepts.

  • As an abstract noun: the process of interpreting and negotiating business requirements and using those to produce high-level technical schematics from which a software system can be built. It is this high-level plan which informs the software development process from inception to launch.
  • As a concrete noun: a well-defined and reusable template for building specific types of software systems.

I typically use the term in both ways but will make an effort to use "architectural template" for the second.

Software Architecture Is Hard

Software architecture is very different from, and some might say harder than, physical (building) architecture. Software architecture is abstract, multi-dimensional, and highly dynamic. When building a software system, the materials (i.e. "bricks") are lines of code, which have a negligible cost. However, since we are dealing in abstractions and intangibles, it is harder to lock down both requirements and a finished design. Compared to real-life construction projects, software projects can quickly morph into multi-headed hydras, tearing apart budgets and timelines with terrifying thoroughness as the team struggles to deliver… something, anything. This is why modern software delivery practices such as Scrum/Agile have become all the rage in the 21st century. It is also why you should care about software architecture concepts, and the software design principles that underlie them.

Design vs. Evolution

It could be argued that software systems are evolved, not designed. This is an over-simplification. It is true that you cannot design an entire system at one time, down to every last detail, and even if you could the business users would change the requirements on you as soon as you deployed it to production. In practice software systems begin with an initial design which is constructed from requirements, and then that system evolves over time as requirements change and new features are added. Oftentimes, systems change not just in response to changing requirements, but in response to an elucidation of requirements that happens over an extended period through an ongoing dialog between the technical team and the business team.

Software systems are complex—that's just the nature of the beast—and the primary imperative of software development is managing complexity (see Code Complete 2 by Steve McConnell for more on this). There is a constant struggle that software teams engage in throughout the life cycle of the software system in their efforts to reduce or eliminate unnecessary complexity, while taming and controlling necessary complexity. In dealing with this complexity while working against the clock and shifting requirements, the product emerges over time.

Emergence: complex systems develop out of simple sets of rules.

This last point is important, and many experts would agree: software systems are emergent in nature. A system built upon sound principles, utilizing good patterns and practices, and following consistent conventions will evolve gracefully over time, be easier to maintain, and have greater longevity. As a knock-on effect, it will save enormous amounts of time and money. Conversely, a system built without any mind to principles, utilizing anti-patterns and bad practices, and having inconsistent conventions will likely miss both time and budget objectives in its initial creation, and it will both devolve and degrade over time as more people work on it. As a result, it will likely cause developer burnout, have a much shorter lifespan, and waste enormous amounts of time and money which are difficult to quantify. This is critical, and I will explain more below.

The Process: Planning and Execution

"Failure to plan is planning to fail." - Old adage

"Everyone has a plan 'till they get punched in the mouth." - Mike Tyson

I've argued this point at great length with other developers, and there still is no universal consensus: how much planning is enough, and when is it time to hit the ground running? Think of this as a continuum which exists between two stereotypes. At one far end of that continuum you have the extreme "Agile" devotees, who believe that all upfront planning and design are anathema to modern software development, anachronistic, and so on. They jump into battle with nothing but their wits, the knowledge they've gained from college, code camp or self-training, and a youthful exuberance and willingness to throw down long hours until the project is done. Sadly, this is far too common, even among teams of experienced developers, and the result is often a Big Ball of Mud, or BBM. Derisively, we might refer to these types as "duct tape programmers," and the product of their efforts is a system which is unmaintainable, even by themselves.

At the other end of the spectrum you have the stodgy, structured, ultra left-brained types who believe that EVERYTHING must be planned in advance and formalized into epic volumes of specifications documents with every "i" dotted and every "t" crossed. Long, tedious whiteboarding sessions involving committee discussions and political negotiations between teams occur over weeks or months of time until a consensus is reached, and business/systems analysts can compile the results into an executable plan. Those official documents are then ceremoniously handed over to the development team who, in lobotomized fashion, execute the plan to the letter with zero feedback or critical thought involved, typically following a waterfall type process. What do you think the result of this is? The answer: a system which is unmaintainable in a different kind of way!

"As software developers, we fail in two ways: we build the thing wrong, or we build the wrong thing." - Steve Smith & Julie Lerman, Domain Driven Design Fundamentals

Over-engineered solutions fail as well because nobody is omniscient, requirements are ALWAYS missed in every project of this type, and using a command-and-control project management style deprives the team of its most valuable resource—the creativity, critical thinking, and problem-solving capacity of its constituent members, down to the developer who is punching in each line of code. The result is a system that is arcane, opaque, badly documented (both formally and in terms of the intelligibility of the code itself), and often hacked together just like your garden-variety BBM, because the team was cutting corners in order to stay on schedule. It was built in a vacuum without feedback between the developers and the stakeholders, so it's likely to have missed the mark entirely in regard to requirements, which is ironically what this style tries to avoid in the first place. Agile enthusiasts derisively call this approach Big Design Up Front, or BDUF. According to legend, this used to be the norm in all corporate software environments, especially in large organizations. Modern software practices such as Agile and Domain-Driven Design came about in response to this. What they all have in common is that they seek to tighten the feedback/execution loop so that you can plan accordingly while responding to changes.

So what approach is best? Is there a best approach? It all depends on the situation. The process is more art than science, and it depends upon so many factors, many of which are impossible to quantify, such as:

  • The temperament of the team and stakeholders
  • The skill level of the team as a whole, and of individual members
  • Cost/time constraints
  • Politics
  • and so on…

If I could give a generalized recommendation regarding the process, it is this: get a good sense of the lay of the land, and then apply the Pareto Principle (the 80/20 rule). Either plan 20% of the solution up front and then execute, or plan 80% of it up front and execute. Be flexible, non-dogmatic and open-minded, and the path will become clear to you. This is my first mention of the Pareto Principle in this blog series, but you will see it pop up again and again, as it seems to be applicable to so many concepts in software development.

As a final word, I will state vociferously that the best software development lifecycle (SDLC) process in the world won't save a project if the developers don't have a solid grasp of fundamental concepts, and I'm not just talking about language features or the latest cool framework. Remember what I said above about software systems being emergent? That's right, the success or failure of any software project depends upon the knowledge and skill of each team member at a granular level. I'm talking about the concepts and behaviors that the team members understand and abide by, which is the topic of the next section.

Patterns, Practices, Principles and Conventions

Here it is again: software systems emerge from a primordial conceptual soup of patterns, practices, principles and conventions that the developers understand and know how to put into practice. These are the basis for both the high-level architecture and the code-level constructs that comprise the system. This is where the rubber meets the road, so to speak. Here's what those terms mean, and the distinction between each of them.


Practices

These are a colloquial set of rules that we as developers have learned over time and apply to our work. Sometimes they are explicitly stated or documented. Other times we learn them by imitation or induction. An example may be where a developer chooses to put business logic—in the UI, in the controller of an MVC app, in a stored procedure of a database, or in a specialized business layer. Practices are functional: altering these will fundamentally change the way a software system operates under the hood.

Some practices are "good," and some are "bad," though there is often debate, as these can be subjective. In certain situations, there is overwhelming consensus that something is either beneficial or detrimental to software development. For instance, it is generally agreed that cutting and pasting code all over the place is detrimental as it leads to software that is hard to maintain.

As you improve your craft, your intuition will often tell you when you are encountering a possibly detrimental software practice. For example, you may be looking over somebody else's code and think to yourself: gee, every time I use this API class, I have to invoke a series of initialization methods in a very specific order or it throws an exception. Something isn't right here. Or perhaps: in order to test for the non-existence of something through an API call, I must perform an arbitrary operation against that object and then catch an exception if it doesn't exist. When you experience this sensation, you are in the presence of a code smell. A code smell may indicate that something is out of whack, or you may have had way too much coffee, shouldn't have been coding for 12 hours straight, and need to take a break.

Going back to our examples above, the first instance may or may not indicate a bad software practice: the initialization methods could perhaps have been combined into a single method or eliminated altogether, depending upon the use case of the API. The second instance almost certainly indicates a bad practice on the part of the API designers, as testing for the existence of something is not an error condition, and the API should give us a better way to do this rather than forcing us to use exceptions for flow control in our logic.
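Here's a sketch of that second smell, using Dictionary<TKey, TValue> to stand in for the offending API (the wrapper class and method names are my own):

```csharp
using System.Collections.Generic;

public static class ExistenceChecks
{
    // The smell: abusing an exception for ordinary flow control.
    public static bool ExistsViaException(Dictionary<string, int> store, string key)
    {
        try
        {
            _ = store[key]; // throws KeyNotFoundException if the key is absent
            return true;
        }
        catch (KeyNotFoundException)
        {
            return false;
        }
    }

    // What a well-designed API offers instead: a non-throwing existence check.
    public static bool ExistsViaTryGet(Dictionary<string, int> store, string key) =>
        store.TryGetValue(key, out _);
}
```

Dictionary actually provides TryGetValue (and ContainsKey) precisely so that callers never need the first version; an API that forces the exception-catching style on you is the one with the problem.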

Why Practices Are Important

Practices are granular behaviors which have a fundamental impact on your productivity and the quality you deliver. Like anything else in life, bad habits/practices lead to poor results. If you use improper form while weight training at the gym, you are likely to injure yourself. If you use bad practices when developing software, you are likely to build solutions which are convoluted, difficult or impossible to maintain, don't do what the customer wants, wind up wasting a ton of time and money, and ultimately injure your career.

Maintainability is the most important consideration when building software. As software evolves over time, it tends to become less maintainable, not more so. Using good practices produces software that has a greater longevity and is less tedious to maintain over the long-term. Using bad practices causes software solutions to accumulate technical debt much more quickly, ultimately driving the solution over what I call the Cliff of Maintainability. This is the point at which incremental changes to the system are prohibitively costly to implement, and it is no longer feasible to keep a system running either in terms of time or money. A complete overhaul is required. You owe it to yourself, your team, your company, and the world at large to keep learning and using the best tools and practices that are at your disposal.


Patterns

These are formalized methodologies or prescriptive ways of implementing something or solving a specific problem. They usually have an explicit name assigned to them—e.g. "Strategy Pattern." Design/architectural patterns typically consist of many practices grouped together into a cohesive technique. An example might be one of the Gang of Four design patterns, like Mediator; or at an architectural level, Command/Query Responsibility Segregation (I will cover this one in depth in a later blog post). Once again, these are emergent—they are "discovered" by software developers through the process of creating software. Some of the best ones have been refined over time by some very sharp technologists using deductive reasoning.

Just like practices, some patterns are "good," and some are "bad," though there is a gray area here as well. Bad patterns are referred to as anti-patterns. Junior to mid-level developers may be able to detect anti-patterns using their code smell sense. Senior developers and architects can often immediately spot an anti-pattern, call it by name, and explain the detrimental implications of that pattern to the system and the business—e.g. "Looks like they're using Entity-Attribute-Value in the database. This will make it extremely difficult to write queries against this table and it will probably become unmaintainable in two years."

Why Patterns Are Important

Patterns are important because they provide recipe-like, reusable solutions to common problems, which helps accelerate development efforts. They provide a common language and lexicon for quickly describing complex concepts to other developers and architects, which helps eliminate ambiguity and increases productivity. Because they represent accumulated knowledge, sometimes acquired through trial and error, these can help to avoid common pitfalls that could waste time or even compromise the entire software project.


Principles

Principles are similar to patterns and practices but operate at a higher intellectual level. I define software principles as abstract and rather philosophical prescriptions for how to build software systems. Principles differ from patterns in that they are more generalized, open to interpretation, and they may be applied differently in various situations depending on the context. They usually do not have any kind of formalized schematic, algorithm or recipe that you can follow to apply them—maybe at most an abstract diagram or analogy. What's interesting, however, is that a given principle may be hiding behind heuristics (mental shortcuts), practices, or patterns. For example, when designing an API method which accepts a collection type as a parameter, you may wish to always use the most generic (abstract) type that you can, typically IEnumerable<T>. The hidden principle is Postel's Law, described below.

I believe that these represent pieces of deeper wisdom, as opposed to knowledge, about building software systems. Principles, by their very nature, demand a certain requisite level of first-hand experience in order to be applied productively, which is why you're more likely to encounter these being espoused by a senior level developer or architect, as opposed to a junior developer. Still, anyone can learn them, and the more you know before you dig in and get your hands dirty, the more proficient you will become over time.

Here are some of the most important principles you're likely to encounter when developing software, and a brief description of each.

KISS (Keep It Simple Stupid)

This is described by Jason Taylor as "the most important principle in software development," and I basically agree [Clean Architecture with ASP.NET Core 2.1]. Now, the definition of "simple" is subjective and could lead to debate (or torturous, hours-long whiteboarding sessions), but the basic nugget of wisdom here is that less complex systems are easier to build and maintain. Note that this is completely in line with Steve McConnell's assertion that the primary imperative of software development is "managing complexity." When designing software systems, simple and elegant is better than "clever" and convoluted.

YAGNI (Ya Ain't Gonna Need It)

Related to the above, think of this as Occam's Razor for your code. If you find a bunch of logic that doesn't seem to be used anywhere or if you're engaged in "preemptive coding," take a step back and ask yourself if it's really needed or if it's just making the system more complicated. Tools built into your IDE, such as CodeLens in Visual Studio, can help enormously with this.

DOGBITE (Do It Or Get Bitten In The End)

This is the counter argument to YAGNI. Sometimes you need to build your solution a certain way because you know from experience that it will have to meet certain conditions in the future. This is related to another Steve McConnell concept, which is that it's far cheaper to fix a problem upstream in the development process than downstream. Likewise, if you know for a fact that a customer will request a feature at some point in the future, it could be cheaper and more maintainable in the long run to include it at the outset.

DRY (Don't Repeat Yourself)

Copying and pasting the same code construct all around your solution does not make for maintainability. What's more, if you must make a change to that construct in one place, then you wind up making that change in all the places, which wastes time and is error prone. A good example might be a certain try/catch block to handle an error condition. Aim for code reuse, which means having that logic in one place and calling it from the components that need it.
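As a sketch, the repeated try/catch mentioned above can be pulled into a single reusable helper. SafeExecutor is a name I've invented for illustration:

```csharp
using System;

public static class SafeExecutor
{
    // One shared guard instead of the same try/catch pasted throughout the solution.
    public static T ExecuteOrDefault<T>(Func<T> operation, T fallback, Action<Exception> onError = null)
    {
        try
        {
            return operation();
        }
        catch (Exception ex)
        {
            onError?.Invoke(ex);
            return fallback;
        }
    }
}

// Usage: the error-handling policy lives in exactly one place.
// int parsed = SafeExecutor.ExecuteOrDefault(() => int.Parse(input), fallback: 0);
```

Now a change to the error-handling policy happens once, in one file, instead of in every component that copied the block.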


Encapsulation

This is the idea that everything related to a certain piece of data or functionality is together in one place. Furthermore, internal details are hidden from outside agencies, not necessarily because of security concerns, but because details are a mental burden and the system becomes more comprehensible if you keep those out of the way. This is related to the notion of information hiding.
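A small example of the idea: the balance field below is an internal detail, and outside code can only touch it through a deliberate, validated surface (BankAccount is an invented illustration):

```csharp
using System;

public class BankAccount
{
    // Internal detail, hidden from outside agencies.
    private decimal _balance;

    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount));
        _balance += amount;
    }

    // Callers see only what they need: a read-only view of the state.
    public decimal Balance => _balance;
}
```

Because no outside code can set _balance directly, the invariant "the balance only changes through valid deposits" is enforced in a single place.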


Cohesion

Piggybacking off the last principle, it's worth mentioning cohesion. This is simply a determination of how well a certain group of components/classes work together to accomplish a common purpose. Aim for high cohesion in your designs. An example of high cohesion is a class which exposes a bunch of extension methods which perform different kinds of string manipulation. An example of a class with low cohesion might be something that exposes some string manipulation methods along with methods to send an email or perform a numerical calculation. Don't do this.
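Sketching both ends of the spectrum (these classes are invented for illustration, and the low-cohesion bodies are stubbed out):

```csharp
using System;

// High cohesion: every member is about string manipulation.
public static class StringExtensions
{
    public static string Reverse(this string s)
    {
        var chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }

    public static string Truncate(this string s, int maxLength) =>
        s.Length <= maxLength ? s : s.Substring(0, maxLength);
}

// Low cohesion (avoid): unrelated responsibilities lumped together.
public static class GrabBagUtilities
{
    public static string Reverse(string s) => throw new NotImplementedException();
    public static void SendEmail(string to, string body) => throw new NotImplementedException();
    public static decimal CompoundInterest(decimal principal, decimal rate, int years) => throw new NotImplementedException();
}
```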

Separation of Concerns

Proceeding logically from the last point, I'd like to mention separation of concerns. This is another way of saying that you should not mix your peas with your carrots, and you should not mix user interface logic in with your business logic. You'll find this discussed much more at length regarding architectural concepts, but it basically says the same thing as aiming for high cohesion: like things go together, and components which have entirely different purposes should be kept apart.

Loose Coupling

Classes, components, layers and the like should be flexibly built so that they can be unplugged from each other without causing cascading changes to the system. This is called loose coupling. Think about it: how safe would you feel driving around in a car in which the power steering system was permanently fused to the radio, which was in turn fused to the tail lights, so that if one of them stops working then all of them stop working? In the same vein, the components you build in your system should not have hard dependencies on each other (pay attention—I'm building toward a critically important apotheosis here). I'll explain how this is done in practice below.

Explicit Dependencies

The components and high-level modules you build should be loosely coupled, but they should also make it entirely clear what they depend upon in order to function. Transparency is the order of the day, and there shouldn't be any mystery as to what's required to use them. Think compositionally and take careful consideration of how the pieces of the system need to interact.


The SOLID Principles

And now, the finale… we've been building to this point, and I'm now ready to introduce you to a set of principles which are collectively referred to as the SOLID Principles. These were organized together and assigned that acronym by Robert C. Martin (whom we all refer to as "Uncle Bob"), even though he didn't invent all of them. The importance of these cannot be overstated, and I highly recommend that you read more on these in depth.

The SOLID principles are:

  • Single Responsibility. Classes and high-level components should do one thing, only one thing, and they should do it well. Abiding by this principle results in code that is highly cohesive.
  • Open/Closed. Originally formulated by Bertrand Meyer, this states that classes should be open to extension but closed to modification. Adding new features to a system should not trigger regressions, or breaking changes.
  • Liskov Substitution. Introduced by Barbara Liskov, this states that you should be able to take a more specific type of something and treat it as a more general type without breaking anything. This is based on the notion of substitutability.
  • Interface Segregation. Think of this as Single Responsibility applied to interfaces. When you are building out abstractions (i.e. interfaces), how much pain is involved in implementing them, and are all the methods of each interface cohesive? The extreme version of this is what Mark Seemann refers to as role interfaces—interfaces with a single member—and they make for much more maintainable and extensible systems.
  • Dependency Inversion. Of all the SOLID principles, I consider this one to be the most important, and it is the basis for all cleanly-designed modern software systems. Quoting directly from Wikipedia, the Dependency Inversion principle states:
    • High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
    • Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
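The two bullet points above can be sketched like so (all of the types here are hypothetical, invented for this illustration):

```csharp
public class Order
{
    public string Id { get; set; }
}

// The abstraction that both sides depend upon.
public interface IOrderRepository
{
    void Save(Order order);
}

// High-level module: depends on the abstraction, not on any database class.
public class OrderProcessor
{
    private readonly IOrderRepository _repository;

    public OrderProcessor(IOrderRepository repository) => _repository = repository;

    public void Process(Order order)
    {
        // ...business rules would go here...
        _repository.Save(order);
    }
}

// Low-level detail: also depends on the abstraction, by implementing it.
public class SqlOrderRepository : IOrderRepository
{
    public void Save(Order order)
    {
        // ADO.NET / Entity Framework details would live here.
    }
}
```

Notice the direction of the dependencies: OrderProcessor never names SqlOrderRepository, so the persistence technology can be swapped without touching the high-level logic.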

All the SOLID principles influence each other and strive for the same objective, which is simple, maintainable code. However, I'd like to touch on the Dependency Inversion principle some more because it is so important. Closely related to the broader idea of Inversion of Control, this principle in a nutshell provides the direction we need toward creating loosely coupled, extensible components. It is saying that you:

  • Should not create new instances of concrete objects directly inside the classes that depend upon them. As the Gang of Four point out in their seminal book, Design Patterns: Elements of Reusable Object-Oriented Software, object creation is one of the primary ways that different sections of code get tightly-coupled together. No bueno.
  • Should not couple your interfaces to concrete types. There's a nuance here that I'll explain.
  • Should provide interfaces to the dependencies that your components need. This is done via dependency injection.

What is dependency injection? It's a topic complex enough that entire books have been written on it, but I'll explain the basics here. It is the means by which we achieve dependency inversion: rather than allowing our business classes and core logic to create instances of the classes they depend upon, we build those classes so that they receive their dependencies as parameters to their constructors (constructor injection). Those parameters are often interfaces and other abstractions, but they don't have to be. By not having to worry about instantiating their dependencies, our core business classes can focus on doing what they do best. An example may be a class which reads a bunch of user data from a database, aggregates it, and writes the results to a spreadsheet file. That class should receive a persistence interface which allows it to read the database data and a file system interface which allows it to write to the file system. The details of those interfaces are irrelevant. Our business class just expects them to work when it calls methods against them as part of an implicit code contract. Notice how we've managed to follow the spirit of multiple principles here: Dependency Inversion, Single Responsibility, High Cohesion, Open/Closed, and so on. In the finished solution, concrete implementations of these dependencies will need to be injected into the classes that need them. How you go about this is a design decision, but a common approach is to use an automated tool called an Inversion of Control container, or IOC container.
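Here is a rough sketch of that spreadsheet example. The interface names (IUserReader, ISpreadsheetWriter) and the UserRecord type are my own inventions for illustration:

```csharp
using System.Collections.Generic;

public class UserRecord
{
    public string Name { get; set; }
    public int LoginCount { get; set; }
}

// Hypothetical abstractions; the concrete implementations live elsewhere.
public interface IUserReader
{
    IEnumerable<UserRecord> ReadAllUsers();
}

public interface ISpreadsheetWriter
{
    void WriteRow(IEnumerable<string> cells);
}

public class UserActivityReport
{
    private readonly IUserReader _reader;
    private readonly ISpreadsheetWriter _writer;

    // Constructor injection: dependencies arrive as abstractions,
    // so this class never instantiates its collaborators.
    public UserActivityReport(IUserReader reader, ISpreadsheetWriter writer)
    {
        _reader = reader;
        _writer = writer;
    }

    public void Run()
    {
        foreach (var user in _reader.ReadAllUsers())
            _writer.WriteRow(new[] { user.Name, user.LoginCount.ToString() });
    }
}
```

In tests you can hand UserActivityReport an in-memory reader and a fake writer; in production, the IOC container wires in the real database and file-system implementations.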

Additional Principles Worth Mentioning

Persistence Ignorance

This follows logically from the Dependency Inversion principle, and it simply states that systems should be agnostic of their underlying data store, whether that's a database somewhere, the file system, or some kind of storage medium that has yet to be invented. This is a principle that seems great on paper, but in practice it is difficult to achieve without ending up with leaky abstractions, that is, details of the underlying data store leaking through the very interface that is supposed to hide it.
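A quick sketch of what persistence ignorance looks like in practice (the `Customer` type and store names are my own invented example). Nothing in the interface betrays whether the data lives in SQL Server, a file, or a dictionary; a leaky version would expose store details, such as a `FindById(Guid id, SqlTransaction tx)` overload:

```csharp
using System;
using System.Collections.Generic;

public record Customer(Guid Id, string Name);

// Persistence-ignorant contract: no SQL, no file paths, no vendor types.
public interface ICustomerStore
{
    Customer FindById(Guid id);
    void Save(Customer customer);
}

// Any medium can satisfy the contract; here a plain dictionary stands in
// for a database, which is also handy for unit testing.
public class InMemoryCustomerStore : ICustomerStore
{
    private readonly Dictionary<Guid, Customer> customers = new();

    public Customer FindById(Guid id) =>
        customers.TryGetValue(id, out var customer) ? customer : null;

    public void Save(Customer customer) => customers[customer.Id] = customer;
}
```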

Postel's Law (The Robustness Principle)

This is a principle that I'm fond of, because it makes for interfaces and components that are easier to use. It states that methods should be liberal in what they accept and conservative (specific) in what they return. A good example in .NET might be a method which accepts a general type like IEnumerable<T> as a parameter and returns a very specific type such as List<T> as its result. By using this pattern, you can do things like pass the result of a LINQ expression into the method call, and you always know EXACTLY what you are getting back, without violating the Liskov Substitution principle.
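Following that pattern, such a method might look like this (the `NameUtilities` class and its trimming logic are my own invented illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class NameUtilities
{
    // Liberal input: any IEnumerable<string> works (an array, a List<string>,
    // or a deferred LINQ query). Conservative output: always a concrete List<string>.
    public static List<string> NormalizeNames(IEnumerable<string> names) =>
        names
            .Where(n => !string.IsNullOrWhiteSpace(n))
            .Select(n => n.Trim())
            .ToList();
}
```

Callers can feed it whatever sequence they have on hand, and they always get back a `List<string>` they can index, count, and mutate without guessing.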

Convention Over Configuration

Finally, there is the principle of Convention Over Configuration. This just means that the way you name classes, methods, etc., or the way you structure your solution, has functional significance in how your solution operates. This is important because it allows you to use a declarative (tell what you want) vs. imperative (tell exactly how to do it) style of programming, which is extremely powerful and reduces the complexity of the finished solution. A good example is registration by convention, which is instructing your IOC container to scan through class libraries and register concrete classes with their corresponding interfaces automatically, based upon some criteria. This saves you from having to write long, tedious registration methods to do it manually. I talk more about conventions below.
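To make registration by convention concrete, here's a toy reflection-based sketch of the idea. A real IOC container keeps far richer metadata (lifetimes, open generics, and so on), but a dictionary from interface type to implementation type shows the essence; the convention here, pairing class `Foo` with interface `IFoo`, is one common example:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class ConventionRegistrar
{
    // Scan an assembly and register every concrete class against the
    // interface whose name is "I" + the class name, if it implements one.
    public static Dictionary<Type, Type> RegisterByConvention(Assembly assembly)
    {
        var registrations = new Dictionary<Type, Type>();
        foreach (var type in assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract))
        {
            var conventionalInterface = type.GetInterface("I" + type.Name);
            if (conventionalInterface != null)
                registrations[conventionalInterface] = type;
        }
        return registrations;
    }
}
```

The declarative payoff: adding a new `IWidgetService`/`WidgetService` pair requires no new registration code at all.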

Why Principles Are Important

Principles, as well as good practices, help keep you on the happy path. This is an informal term describing what it is like when the development process is going well, obstacles are easily overcome, and square-peg-in-round-hole solutions are avoided.

"Force developers into the Pit of Success. Make it easy to do the right thing and hard to do the wrong thing." - Jason Taylor

Principles help guide the solution in the right direction, even when it gets confusing, requirements change, or other situations emerge which can cause you to feel overwhelmed. They provide a logical framework for understanding why something is a good pattern or practice. For example, the Dependency Inversion principle explains why using dependency injection is beneficial to building modern applications. Principles also inform high-level technical decisions about software solutions, above the level of individual patterns or practices. They determine which patterns and practices should be employed in building the solution, and which should be avoided.


Conventions

Conventions are stylistic guidelines for structuring your code, naming files, classes and methods, or otherwise shaping your software solution at a cosmetic level. They differ from patterns/principles/practices in that changes in convention may or may not result in changes to the actual compiled code when you build your solution. Note that under .NET Core, changing how you name something, even a method parameter, can break compatibility with previous versions (callers may pass arguments by name, for example). For this and other reasons, it is worth having clearly defined coding conventions which are agreed upon by everyone on your team, even if you are a team of one.

When it comes to the application of conventions, consistency is key. Having consistent conventions makes your code more readable and thus more maintainable. It also makes it easier to merge your code into source control and handle the inevitable merge conflicts that arise when working on larger teams. Also, as mentioned above, conventions may have an actual impact on configuration or behavior of production code when using certain tools.

Conventions go in and out of vogue much more frequently than patterns/practices/principles. For example, Hungarian notation was all the rage in the late 1990s and early 2000s. Now it is strongly recommended that you DO NOT use this, ever. Note that in .NET, prefixing interfaces with the letter "I" (IDisposable, IEnumerable, etc.) is technically a remnant of Hungarian notation; I recommend that you continue to use that convention. If you ever have any doubt about whether a certain convention is beneficial, emulate the pros. Go out and download source code from GitHub, including the official Microsoft .NET Core source, and see what they do. There are some great books you can read as well. In my case, the conventions I employ are heavily influenced by the book Framework Design Guidelines, by Krzysztof Cwalina and Brad Abrams. For instance:

  • I never use underscores to prefix private fields in a class.
  • I typically put properties near the top of classes, followed by public members, etc.
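Those two habits look like this in a small class (the `OrderService` example is mine, invented purely to show the layout):

```csharp
using System.Collections.Generic;

public class OrderService
{
    // Properties near the top of the class...
    public int OrdersProcessed { get; private set; }

    // ...and private fields named without underscore prefixes.
    private readonly List<string> processedOrderIds = new();

    public void Process(string orderId)
    {
        processedOrderIds.Add(orderId);
        OrdersProcessed++;
    }
}
```

Whether you prefer this exact ordering matters far less than picking one layout and applying it everywhere.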

There are automated tools, both paid and free, that you can use to enforce conventions. For example, the Framework Design Guidelines conventions were at one point codified into a tool called StyleCop; most of those rules are now built directly into Visual Studio. I won't name any paid tools, but a free one that I use frequently is Code Maid, which I appreciate because it's flexible, powerful, and does basically everything that I want.

Why Conventions Are Important

Conventions are important because they make your code base much more readable and give the source code a cohesive, professional appearance. This is especially important for open-source projects or corporate solutions being worked on by multiple people, because not having consistent conventions can adversely affect the usability of your solution. As already stated, they make merging more seamless and they may affect the behavior of automated tools. Overall, employing good conventions will help your solution evolve more gracefully over time and contribute to its maintainability.

Patterns/Practices/Principles/Conventions in the Real World

There are a lot of ideas that look great on paper but are difficult to implement in practice. Beware of the trap of perfectionism. Even Eric Evans, the creator of Domain-Driven Design (more on this in a future post), warns about this danger. Oftentimes, ideals are at odds with reality. In many situations you need to use common sense, along with your accumulated experience, to know whether something is appropriate to solving the problem at hand. It's been said so often in software development that it's basically a cliché, but I'll say it again: there is no silver bullet. Everything is a trade-off or compromise. It is your wisdom which will tell you whether to use a certain pattern or practice, and what the pros/cons will be.

Principles can often be at odds with each other, and you must judge which one is more important. That said, some principles almost always take precedence (e.g. loose coupling is more important than DRY) [Vladimir Khorikov, CQRS in Practice]. There are exceptions to rules, and sometimes you have to break the rules. You can find cases where I've done this in the demo solution. The Pareto principle (here it is again) frequently applies:

  • You may only be able to apply a certain software principle to 80% of your solution, or you can only apply 80% of that principle. For the other 20% you must use a different approach.
  • 80% of the usage of your system may follow a certain pattern. For example, perhaps 80% of the interactions with your system are queries against the data store and the other 20% are updates to the data. More on this in the blog post on Clean DDD.
  • Maybe 80% of your users fall into a certain category, and the other 20% are a special case.
  • Maybe you only need 80% code coverage for your solution when writing unit tests.
  • So on, and so forth.

There are no hard-and-fast rules in software development, and often you'll need to trust your inner sense of judgment. This is exactly why I consider this profession to be more art than science.

Toolkits, Frameworks, and APIs

I realize that this blog entry has run long, but I want to make you aware of some other terms you'll run across again and again.


Toolkits

A toolkit is a third-party software package, which might be either paid or open source, that you plug into your software system in order to accomplish certain tasks that you otherwise wouldn't want to have to write code for yourself. A good example would be MediatR, by Jimmy Bogard, which I will be discussing in more detail in the blog post on Clean Domain-Driven Design. This package provides a comprehensive implementation of the Mediator design pattern and an in-process messaging system. There are NuGet packages out there for basically everything, so do your homework when you are planning your solution. Reinventing the wheel and writing code for something that's already been done is what we call "rolling your own," and I really don't recommend it unless you want to take a trip down the path of pain (to play devil's advocate, if you're just starting out this can be a good way to learn, but if you're building enterprise solutions it'll just chew up your time).
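To give a flavor of the MediatR style, here's a sketch of a request and its handler. The interfaces below are simplified stand-ins for MediatR's `IRequest<T>`/`IRequestHandler<TRequest, TResponse>` abstractions so the example compiles on its own; in a real project you'd install the MediatR NuGet package instead, and the `GetGreeting` request is my own invented example:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Simplified stand-ins for MediatR's abstractions (the real package's
// interfaces follow this same shape).
public interface IRequest<TResponse> { }

public interface IRequestHandler<in TRequest, TResponse> where TRequest : IRequest<TResponse>
{
    Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken);
}

// A request and its handler: the code that sends GetGreeting and the code
// that handles it never reference each other directly.
public record GetGreeting(string Name) : IRequest<string>;

public class GetGreetingHandler : IRequestHandler<GetGreeting, string>
{
    public Task<string> Handle(GetGreeting request, CancellationToken cancellationToken)
        => Task.FromResult($"Hello, {request.Name}!");
}
```

With the real package, a mediator instance from your IOC container routes `Send(new GetGreeting("World"))` to the matching handler, which is what keeps senders and handlers decoupled.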


Frameworks

A framework is a comprehensive collection of tools, APIs, and other building blocks that act as the foundation for building software. For example, .NET is a framework. It includes compilers for various languages, the Common Language Runtime, and a number of framework packages that you can include in your projects. "Toolkit" and "framework" are sometimes used interchangeably, but the big distinction is in how comprehensive each is. Toolkits are generally smaller and have a narrower focus. If you get confused, just remember: you build with a toolkit; you build on top of a framework.


APIs

As previously discussed, "API" stands for Application Programming Interface, and it refers to the outwardly visible classes, functions, methods, components, or other pieces that you directly work with when using a toolkit or framework. Think of it as the control panel for a machine whose implementation is hidden from you, possibly inside a (literal) black box. APIs might be part of software components that you pull into your solution using a package manager, or they might be external services that you communicate with over some network protocol, typically REST or a message bus. Note that an API encompasses not just its methods, functions, and components, but also the way in which you interact with them. This last point is important, because certain APIs involve greater or lesser amounts of ceremony (typically configuration steps which may seem asinine or tedious).

A Final Word

You made it this far, so I'd like to reward you with a few Buzzword Bingo cards that you can download here and here. The next time you're on the phone with a recruiter, see if you can get five in a row!

Seriously though, I'd like to provide you with some sage advice that will help you in your life and your career as a software developer. Let's call this the inner game of software development. I could write an entire blog series on this, but I'll keep it brief.

#1: It's Okay to Fail

If someone in your profession (especially if that person is in a managerial position) says "failure is not an option," my best advice to you is to turn around and run the other way as fast as you can. Expectations of perfection from other people are a sign that you're in a hell job, and you don't deserve that. The fact of the matter is that both machines and mammals learn by making mistakes, correcting course, and then remembering the correct approach. Making mistakes is okay, and you are not your mistakes. Just make sure you learn from your mistakes, and when you do inevitably screw up, try to fail fast and fail small.

#2 Go Easy on Yourself

We all have an inner critic and beat ourselves up. The voice of that inner critic manifests as imposter syndrome, and it's extremely common in this profession. If you don't know what that is, just imagine an overwhelming feeling of insecurity because "you are a phony, you were never cut out for this, blah, blah, blah." I experience it all the time too, and I've been programming computers for most of my life. No joke. I taught myself DOS 6.0 commands and started writing programs in a language called Quick Basic on a 286 when I was a kid (if you don't know what any of those are, then yes, you are a Millennial). You know what imposter syndrome really is? It's your ego messing with you. If you want to short-circuit imposter syndrome and silence your inner critic, here are some suggestions:

  1. Stop comparing yourself to other people. I can't stress this enough. You are your own person, with your own talents and shortcomings. Don't measure yourself against somebody else's bar. This is another cliché, but here it is: the only person you should be comparing yourself to is YOU, yesterday, last week, last month, last year.
  2. Celebrate your wins, however small. Remembering your past successes will help you keep pushing ahead, even when you feel that old insecurity popping up.
  3. Surround yourself with positive people. It's hard to deal with stress and the daunting presence of the unknown if you have the psychic baggage of negative people weighing you down. Seek out people and communities that are supportive and non-judgmental, and it will help you immensely in your journey.
  4. Strive for continuous improvement. Read, read, read! Keep learning and growing, and acknowledge the progress you make along the way. Incremental results do add up.

#3 A Mentor Is a Gift. Otherwise, Find Some Good Role Models

I define a mentor as an individual who has an active and personal involvement in your career development, whom you can consult at length for professional advice, and who has an interest in seeing you succeed. Think of a mentor as Yoda: he or she teaches you individually, responds to your questions, and uses his or her resources to clear the way for you to fulfill your full potential. Very few people get the awesome privilege of having a mentor, but if you do, consider yourself lucky. The rest of us, myself included, haven't had that opportunity, but there's an alternative: finding good role models. What is a role model, as opposed to a mentor? I define a role model as a person you may or may not know personally, and who may not even still be alive. For example, Abraham Lincoln is on my list of role models. I encourage you to make a list of people you admire, find out as much about them as you can, and figure out what makes each of them tick. Then, emulate the behaviors or qualities they embodied which accounted for their success or otherwise made them good people. You'd be surprised at the knowledge and wisdom you'll gain from this that you can apply directly to your own life.

Parting advice: just remember that your journey is a marathon, not a sprint. There will be setbacks and obstacles that take time to overcome, but with the right attitude, commitment and willingness to learn, you can and will succeed.


Summary

In this lengthy blog entry, I explained the basic types of thinking that are involved in the effort of software development, and problem-solving in general. I laid out some basic software design and architectural concepts and discussed briefly how software is actually built. I mentioned patterns, practices, principles and conventions, and how those influence the software development process. Finally, I gave some sage wisdom representing the "inner game" of software development to help you along your way. There's one more thing I forgot to mention… make sure you're having fun with it too!


References

Morris Kline

Vladimir Khorikov

Robert C. Martin (Uncle Bob)

Eric Evans

Jason Taylor

Steve Smith

Julie Lerman

Mark Seemann

Steve McConnell

Jimmy Bogard

Framework Design Guidelines (Krzysztof Cwalina and Brad Abrams)

Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides). Foreword by Grady Booch.


Code Maid

This is entry #4 in the Foundational Concepts Series
