Developers are often told not to focus on application performance before it is required (to prevent premature optimization), and that a productive language (one that takes little developer effort to produce code) is more important than a language that produces fast code, because developer time is so expensive. The idea is that as long as the code runs fast enough in production to do its job in time, you are good: you can stick with a language that has slow execution speed but good productivity.
This is of course often true, but most of the time it is not the complete truth. A lot of time and money can ‘leak’ away when you accept slow-performing code, which is why we explain the hidden costs of slow application code.
Besides producing code, the other activity that takes up a lot of a developer’s time is testing: running the application right after a change to the code, and running the tests in the CI pipeline.
The former case is probably very recognizable: you edit your code, you hit the ‘Run’ shortcut keys, you wait a little while for the application to spin up, and then you check whether the application does what you intended it to do.
Now, how long does it take from the moment you hit that ‘Run’ shortcut combination to the moment you can see whether the application behaves as intended? That amount of time is often ‘forgotten’ when reasoning about the choice of language/runtime. It includes both compilation and the time for the application to go through its paces, so both the performance of the compiler and the performance of the application code itself can affect that duration.
The latter case may be a bit more disguised. The CI pipeline is supposed to just run in the background on a server, without interfering with the developer’s work. In reality, however, a developer often ends up waiting to see how the new code fares in all the integration tests that run in the CI pipeline. The developer may be working on something else, but is still mentally occupied with what the outcome of the tests may be.
The statement ‘the performance of an application in production is not very important as long as it gets the job done fast enough’ used to be quite true when all code ran on company-owned or other dedicated hardware. In these days of cloud computing, the price of running software in production commonly depends heavily on resource usage, in the form of CPU cycles and/or memory. This means that inefficient/slow software may cost many times as much as efficient/fast software that performs the same task in the same setting. With the advent of serverless technology, this is becoming an ever larger factor in the TCO (Total Cost of Ownership) of an application.
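A rough back-of-the-envelope sketch can make this concrete. The numbers below (price per GB-second, invocation count, memory size) are assumptions chosen purely for illustration, not real prices, but the proportionality holds for any per-usage billing model:

```python
# Hypothetical serverless billing model: cost scales with GB-seconds consumed.
# All constants are assumed example values, not actual cloud prices.
PRICE_PER_GB_SECOND = 0.0000166667   # assumed price in USD
INVOCATIONS_PER_MONTH = 10_000_000   # assumed workload
MEMORY_GB = 0.5                      # assumed memory allocation


def monthly_cost(avg_duration_seconds: float) -> float:
    """Monthly compute cost for the assumed workload and billing model."""
    gb_seconds = INVOCATIONS_PER_MONTH * MEMORY_GB * avg_duration_seconds
    return gb_seconds * PRICE_PER_GB_SECOND


fast = monthly_cost(0.050)  # efficient implementation: 50 ms per call
slow = monthly_cost(0.500)  # inefficient implementation: 500 ms per call
print(f"fast: ${fast:.2f}/month, slow: ${slow:.2f}/month")
```

Under per-usage billing, a 10× slower implementation is simply 10× more expensive, month after month, whereas on dedicated hardware the same inefficiency would stay invisible until the machine ran out of capacity.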
Let’s look at solutions and countermeasures for the areas of interest.
On this aspect, some of the languages that are slow at run-time may have an advantage. For example, scripting languages that use an interpreter avoid waiting for a compiler to finish. In the realm of compiled languages, the designers of a number of newer languages, such as Go and the more obscure Vlang, also focus explicitly on compiler performance.
Paying attention to the compiler performance of a specific language can greatly improve the developer experience and shorten the total development cycle.
The run-time performance (or cost) of the application in production is determined mostly by two factors:
- Algorithmic efficiency of the code. This depends on the proficiency of the developers and the time given to develop the application; optimizing code can cost a lot of time.
- Performance of the language implementation and runtime. This is determined by the choice of language for the project, which may of course be influenced by existing code, the developers’ existing knowledge, or other factors.
Choosing the right balance here is key, and knowing that run-time performance influences both in-production cost and development cost helps a lot in making the right choice.
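To illustrate the algorithmic-efficiency factor, here is a minimal sketch (the task and function names are invented for this example) of the same question answered at two different complexities. Both versions are correct; only their scaling behavior differs:

```python
# Same task, two algorithmic complexities: does the list contain a duplicate?

def contains_duplicate_quadratic(items):
    # O(n^2): compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def contains_duplicate_linear(items):
    # O(n): a set gives amortized constant-time membership checks.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

On a list of a few elements the difference is unmeasurable; on a list of a million elements the quadratic version can take minutes while the linear one finishes in milliseconds. That gap shows up in the cloud bill and in every test run, regardless of how fast the language itself is.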
The performance in testing (the time it takes to run the tests) is determined by many factors, but let’s look at the most important ones:
- Run-time performance – described in the previous section; it also has an important impact when running your tests.
- Test-data set – choosing the test data wisely can have a huge impact on the duration of a test. Carefully construct the set to include all of, and only, what you want to test.
- Test environment – choose the proper technologies to run the tests in the environment they require. Try to avoid spinning up VMs for each test; look into containers, which are much more lightweight and start in a fraction of the time.
- Mock objects – in unit tests, using mock objects instead of e.g. a real database or filesystem is good practice, and it can also make a big difference in test performance.
- The test logic – how and what you test should also be carefully considered to get the maximum out of your testing investment.
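As a minimal sketch of the mock-object point, here is a unit test that replaces a database with a mock; `fetch_user_name` and its `db` parameter are hypothetical names invented for this example, using Python's standard `unittest.mock` module:

```python
from unittest.mock import Mock


def fetch_user_name(db, user_id):
    """Hypothetical logic under test: look a user up and format the name."""
    row = db.get_user(user_id)
    return row["name"].title()


def test_fetch_user_name():
    # The mock stands in for a real database: no connection, no I/O,
    # so the test runs in microseconds instead of milliseconds or more.
    fake_db = Mock()
    fake_db.get_user.return_value = {"name": "ada lovelace"}

    assert fetch_user_name(fake_db, 42) == "Ada Lovelace"
    fake_db.get_user.assert_called_once_with(42)


test_fetch_user_name()
```

Multiplied over thousands of unit tests per CI run, avoiding real I/O in this way is often the difference between a pipeline that finishes in minutes and one that takes an hour.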
You can gain a lot by making the right choices at the start of a software project with regard to programming language, program algorithms, and test-suite setup. The use of cloud computing can be a major factor because it changes the cost impact of badly performing code. The development cycle can also be affected in a number of ways by the performance of the code (both the application and the compiler).