Optimization is the natural next step after observing your server's performance. You might want to squeeze out every extra millisecond, but in practice most people turn to optimization only when their applications suffer gross performance hits.
Without knowing it, you have probably explored performance-enhancing options already: caches, load-balancing clusters, and cloud services all help accommodate growth. This recipe, however, gives you general ideas on the common performance pitfalls within the domain of a single server (physical or virtual), and on the low-hanging fruit you can pick to improve it.
Set up application profiling for your programming language to find bottlenecks and improve your logic. For example, Xdebug is very popular in the PHP community and can help you address some of these scenarios rapidly.
Install it with sudo apt-get install php5-xdebug. Enable it for Apache with sudo editor /etc/php5/apache2/php.ini; browse all the way to the end and add:
[xdebug]
xdebug.profiler_enable = 1
Now restart Apache with sudo service apache2 restart. Xdebug will drop cachegrind files in /tmp (you can change this in php.ini if needed), and you can inspect those cachegrind files with a tool such as KCachegrind, which will show you the time spent in each function, as shown in the following screenshot:
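If you just want a quick, GUI-free peek at what the profiler recorded, the cachegrind format is plain text, and the fn= records name each profiled function. The sketch below fabricates a tiny sample file standing in for a real /tmp/cachegrind.out.* file (the function names are hypothetical) and counts how often each function appears:

```shell
# Create a stand-in for a real Xdebug cachegrind file
# (real files are named /tmp/cachegrind.out.<pid>)
cat > /tmp/cachegrind.sample <<'EOF'
fn=main
fn=render_page
fn=db_query
fn=db_query
fn=db_query
EOF
# List the recorded functions, most frequently seen first
grep '^fn=' /tmp/cachegrind.sample | cut -d= -f2 | sort | uniq -c | sort -rn
```

This is no substitute for KCachegrind's cost breakdown, but it is handy on a headless server.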
Act on your slow queries. This might mean creating indices, reviewing your data model or ORM facilities, or even working with DBAs and developers to change the queries altogether. A good next step is to run them directly on the database console using EXPLAIN, which can tell you whether a query is using the most efficient access path the database provides; in some cases you can improve a SELECT by creating indices, and in other cases EXPLAIN provides cues for rewriting the query. For a simple query on a table with an index, the output looks like:
This explains the behavior of a query that is executed immediately after the server boots up and then repeated right afterwards, dropping from 0.29 seconds to something much faster, as shown in the next screenshot, side by side:
In this second query, Using temporary and Using filesort would not be a problem on a 100-row table, but this one has 300,584 rows, so the query takes 0.23 seconds to complete.
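A query of this shape can be examined directly from the database console. The following is only a sketch, and the table and column names are hypothetical; grouping plus ordering on a computed value is a typical trigger for Using temporary and Using filesort in the plan:

```sql
-- Names here are illustrative; substitute your own slow query
EXPLAIN SELECT customer_id, SUM(total) AS spent
FROM orders
GROUP BY customer_id
ORDER BY spent DESC;
```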
EXPLAIN helped to identify the problem, as indicated in the following screenshot:
Now, by creating an index, we can make the time drop (see the Using index clause in the following screenshot):
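As a sketch of that step (again with hypothetical names, so adapt them to whichever column EXPLAIN flagged):

```sql
-- Index the column the slow query groups on, then re-check the plan
CREATE INDEX idx_orders_customer ON orders (customer_id);
EXPLAIN SELECT customer_id, SUM(total) AS spent
FROM orders
GROUP BY customer_id;
```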
Fine-tuning (or tweaking) configuration parameters for cache sizes, flush behavior, and so on might also be an option.
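For MySQL on a Debian system of this era, such parameters live in /etc/mysql/my.cnf. The values below are purely illustrative, not recommendations; they must be sized to your workload and available RAM:

```
[mysqld]
# Cache sizes: the InnoDB buffer pool is usually the single most
# influential knob (illustrative value only)
innodb_buffer_pool_size = 1G
# Flush behavior: trade a little durability for fewer disk syncs
innodb_flush_log_at_trx_commit = 2
```

After editing, restart the service with sudo service mysql restart for the changes to take effect.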
Five to ten years ago, optimizing the kernel was a big thing. People upgraded constantly, looking for new features to improve performance; much of that improvement reached laptops and desktops, and little of it reached servers. Nowadays, many mid-market sysadmins prefer to leave the kernel as is in favor of easier management, while top web companies may employ teams dedicated solely to kernel optimization. Some vendors might even restrict the amount of kernel customization they will tolerate for support or warranty purposes.
Yes, faster I/O can add performance value to your server: more bandwidth, faster bus speeds, better RAM technologies, faster disks, fibre-attached storage, and so on should all be explored. But other solutions, such as growing horizontally by adding more servers behind a load balancer, can also help. It is important to find the right balance between an elastic growth strategy and a manageable architecture. Fortunately, Debian offers several free software tools, as well as enough customization hooks, for you to explore your own approach.