
What Are The Best Software Engineering Principles?


Tell me, dear developers: has anyone ever managed to build a working real-life project that complies with all the Best Practices, has no hacks or workarounds, and leaves nothing to complain about? If so, why lie to yourself and others? Best Practices are only theory, a theory that not only fails to stretch into practice over time but eventually starts to contradict itself.

We developers, architects, and managers are people who struggle with lapses of attention, silly mistakes and typos, personal problems, bad moods, and cold coffee. That's why we need a small set of great underlying principles for engineering things.

Software engineering principles are a set of recommendations that engineers should follow during implementation if they want to write beautiful, clear, and maintainable code. There is no magic wand that can turn a mishmash of variables, classes, and functions into perfect code, but a few tips and hints can help an engineer determine whether they are doing the right thing.

Measure twice and cut once


If you take only one principle out of this post, it should be this one.

For me as an engineer, this principle means that before you start building functionality, you should first think things through: choose the right problem, choose the right approach to that problem, choose the right tools to solve it, assemble the right team, and pick the metrics by which the final solution will be measured and monitored.

Only then can you start implementing against the pre-designed plan, even if you are not completely sure about it. This is engineering at its core.

Don’t Repeat Yourself (DRY)

It is a rather simple but very useful principle which states that repeating the same thing in different places is a bad idea. This matters primarily for maintaining and modifying the code later: if some code segment is duplicated in several places inside the program, it leads to the following problems:

  1. If you make even a small change to the source code, you will have to change the same code in several places. This requires additional time, effort, and attention, and it is often not easy.
  2. You or another developer on your team may accidentally miss one of the changes (this can happen simply by merging branches in the VCS) and run into subsequent errors in the application. Such bugs are especially frustrating because you were sure this bug had already been fixed.

Because of the above, there is a common recommendation: if any code occurs more than twice in the codebase, you should move it into a separate function. In fact, it is worth considering a separate method even when you encounter the repetition for just the second time.
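Here is a minimal Python sketch of the idea (the function names and the 20% tax rate are purely illustrative): the same tax formula is first copy-pasted, then extracted into a single helper.

```python
# Duplicated logic: the same tax formula is written out twice.
def order_total(prices):
    subtotal = sum(prices)
    return subtotal + subtotal * 0.2  # 20% tax

def invoice_total(line_items):
    subtotal = sum(item["price"] for item in line_items)
    return subtotal + subtotal * 0.2  # 20% tax, copy-pasted

# DRY version: the tax rule now lives in exactly one place.
TAX_RATE = 0.2

def with_tax(subtotal):
    return subtotal + subtotal * TAX_RATE

def order_total_dry(prices):
    return with_tax(sum(prices))

def invoice_total_dry(line_items):
    return with_tax(sum(item["price"] for item in line_items))
```

Now, if the tax rule ever changes, there is only one line to edit, and no change you can accidentally miss during a merge.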

Keep It Simple Stupid (KISS)


If you hear hoofbeats, think horses, not zebras.

Some think this idea evolved from the philosophical principle of Occam's Razor. You can interpret it as follows: one should not add extra entities to the system without strong necessity. It is always a good idea to first weigh the usefulness of adding another method/class/tool/process, etc.

After all, if you add another method/class/tool/process and get no benefit other than increased complexity, what's the point?

Remember what Pieter Hintjens said: "Simplicity is always better than functionality".
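To make this concrete, here is a deliberately exaggerated Python sketch (all names are hypothetical): both variants double a number, but one of them invents two extra entities to do it.

```python
# Over-engineered: a factory and a strategy class just to double a number.
class DoublingStrategy:
    def apply(self, x: int) -> int:
        return x * 2

class OperationFactory:
    def create(self) -> DoublingStrategy:
        return DoublingStrategy()

result = OperationFactory().create().apply(21)  # 42

# KISS: a plain function does the same job with zero extra entities.
def double(x: int) -> int:
    return x * 2

result = double(21)  # 42
```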

You Aren’t Gonna Need It (YAGNI)


Many programmers suffer from one well-known problem: the desire to implement all the necessary (and sometimes unnecessary) functionality at once, from the very beginning of the project. The developer adds every possible method to a class right away and implements them all, even though many of them may never be used.

According to this principle, you should implement only the basic things first and expand the functionality later, if it ever becomes necessary. This way you save effort, time, and nerves that would otherwise go into debugging code that may never really be needed.
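A small Python sketch of the difference (the exporter and its formats are made up for illustration): the first class speculates about every format someone might want one day; the second implements only the format that is actually required.

```python
import csv
import io

# Speculative design: stubs for formats nobody has asked for yet.
class ReportExporter:
    def to_csv(self, rows): ...
    def to_xml(self, rows): ...   # no requirement for XML yet
    def to_pdf(self, rows): ...   # no requirement for PDF yet

# YAGNI: implement only today's requirement; extend when a real need appears.
class CsvReportExporter:
    def to_csv(self, rows):
        # Assumes at least one row; each row is a dict of column -> value.
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        return buffer.getvalue()
```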

Avoid Premature Optimization

"Premature optimization is the root of all evil (or at least most of it) in programming" — Donald Knuth

Optimization is a perfectly valid and necessary process for refactoring an application, speeding it up, and reducing its consumption of system resources. But everything has its time. If you optimize at the early stages of development, it may do you more harm than good. First of all, developing optimized code requires more time and effort. Besides, you often have to keep verifying the correctness of the whole system with all kinds of regression tests.

That's why it is better to start with a simple but not necessarily optimal approach. Later, once you have measured to what extent this approach actually slows down the application, you may decide to switch to a faster or less resource-intensive algorithm. Moreover, while you are busy implementing the most optimal algorithm up front, the requirements may change and the code will end up in the trash. So do not waste your time on premature optimization.
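As a sketch in Python (the word-counting task is just an example): write the obvious version first, and let a profiler, rather than a hunch, tell you whether it needs optimizing.

```python
from collections import Counter

# The obvious, readable implementation comes first.
def most_common_word(words):
    return Counter(words).most_common(1)[0][0]

if __name__ == "__main__":
    # Profile before optimizing: only replace this implementation
    # if the numbers prove it is an actual bottleneck.
    import cProfile
    cProfile.run("most_common_word(['a', 'b', 'a'] * 100_000)")
```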

Principle Of Least Astonishment

This principle means that your code should be intuitive and obvious, and should not surprise another developer who reviews it. Also, try to avoid side effects, and document them if they cannot be avoided.

For example, if a method is called making_cookies but produces Potato objects, such code will surprise anyone and is clearly bad.
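Expanding that example into a Python sketch (the Cookie class and the corrected name are my additions):

```python
class Cookie: ...
class Potato: ...

# Astonishing: the name promises cookies, the code returns potatoes.
def making_cookies(count):
    return [Potato() for _ in range(count)]

# Unsurprising: the name, the return value, and the behavior all agree.
def make_cookies(count):
    return [Cookie() for _ in range(count)]
```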

S.O.L.I.D.

“SOLID” is actually a group of object-oriented design principles. Each letter in the acronym represents one of the principles, which are:

  • Single responsibility states that every module or class should have responsibility for a single part of the functionality provided by the software and that responsibility should be entirely encapsulated by the class;
  • Open-closed states that software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification;
  • Liskov substitution states that the inherited class should complement, not replace, the behavior of the base class;
  • Interface segregation states that no client of the class should be forced to depend on methods it does not use;
  • Dependency inversion states that programmers should work at the level of interfaces, not at the level of concrete implementations.

When applied together, these principles help a developer create code that is easy to maintain and extend over time.
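As a brief Python sketch of two of these principles (the notifier and sender classes are invented for illustration): the high-level OrderNotifier depends on an abstraction (dependency inversion) and only decides when to notify, not how messages are delivered (single responsibility).

```python
from typing import Protocol

# The abstraction that high-level code depends on (dependency inversion).
class MessageSender(Protocol):
    def send(self, to: str, body: str) -> None: ...

# Concrete implementations can be swapped without touching the notifier.
class EmailSender:
    def send(self, to: str, body: str) -> None:
        print(f"emailing {to}: {body}")

class SmsSender:
    def send(self, to: str, body: str) -> None:
        print(f"texting {to}: {body}")

# Single responsibility: decides *when* to notify, not *how* to deliver.
class OrderNotifier:
    def __init__(self, sender: MessageSender) -> None:
        self.sender = sender

    def order_shipped(self, customer: str) -> None:
        self.sender.send(customer, "Your order has shipped")

OrderNotifier(EmailSender()).order_shipped("alice@example.com")
OrderNotifier(SmsSender()).order_shipped("+1-555-0100")
```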

Law of Demeter

The basic idea here is to divide areas of responsibility between classes and to encapsulate logic within a class, method, or structure. Several recommendations can be derived from this principle:

  1. Decoupling. You should try to reduce the number of connections between different classes or entities.
  2. Cohesion. The associated classes must be in one module/package/directory.

By following these principles, you make the application more flexible.
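A minimal Python sketch of the "talk only to your direct collaborators" idea (Order, Customer, and Wallet are invented names): instead of reaching through order.customer.wallet.balance, the order asks the customer to pay, and the customer handles its own wallet.

```python
class Wallet:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class Customer:
    def __init__(self, wallet):
        self._wallet = wallet

    # The customer exposes a behavior, not its internal wallet.
    def pay(self, amount):
        self._wallet.withdraw(amount)

class Order:
    def __init__(self, customer, price):
        self.customer = customer
        self.price = price

    def checkout(self):
        # Talks only to its direct collaborator, never to wallet internals.
        self.customer.pay(self.price)

Order(Customer(Wallet(100.0)), price=42.0).checkout()
```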

Conclusion

Fellow developers, let's be engineers! Let's think about design and build robust, well-architected systems rather than growing organic monsters. The listed principles are highly correlated and connected in their essence. Of course, I did not invent them, but a small reminder never hurts.

