There is a common misconception floating around some user forums. Maybe you recall this phrase:
RAM is there to be used.
This phrase is true within a particular context: it means letting the operating system cache as much disk content as possible in RAM, since RAM is significantly faster than hard disks.
However, within the context of a user-space application it is actually a fallacy. Some users (and even some developers) don’t know much about the inner workings of their computers and use this phrase outside of its proper context. Worse, sometimes they don’t even care much about RAM consumption. They don’t care about memory leaks, or think that a memory leak that occurs only once is fine, or assume that garbage-collected frameworks or languages will take care of everything by magic. Without knowing it, they end up using the aforementioned phrase as the equivalent of:
RAM is there to be wasted.
Several misunderstandings actually feed this thought, leading people to believe that an application should put as much information as it wants in RAM, that sooner or later “the OS will handle it efficiently”, and that not doing so is just a performance sacrifice.
Consider the following (I’ll oversimplify for the sake of an easy explanation, but the model holds). On one hand:
- When an application requests RAM and the OS allocates it, that memory is reserved for the application’s exclusive use until it frees it. An application cannot know (and should not need to know) whether another application requires or requests RAM.
- If physical RAM fills up and there is swap space available in the system, the OS will move some of the least-used RAM pages out to swap space, be it a partition or a file. When paged-out memory is needed again, it is swapped back in, displacing other least-used physical RAM pages. Both operations require hard disk activity. That’s what makes swapping inherently slow.
- The operating system will always leave some physical RAM unused (say, 50 MB), so it has RAM available to react to a system emergency.
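The allocation and swap mechanics above can be written down as a toy model. This is a minimal sketch with made-up page sizes and a naive least-recently-used policy, not how any real kernel works:

```python
from collections import OrderedDict

class ToyMemory:
    """Toy model of physical RAM plus swap with least-recently-used eviction.

    All sizes are in MB; one "page" here is 1 MB for simplicity.
    """

    def __init__(self, physical_mb, swap_mb):
        self.physical = physical_mb
        self.swap_capacity = swap_mb
        self.ram = OrderedDict()   # page -> owner; insertion order ~ recency
        self.swap = {}             # pages moved out to swap space
        self.disk_writes = 0       # swapping out costs disk activity

    def touch(self, page):
        """Mark a page as recently used, swapping it back in if needed."""
        if page in self.swap:
            owner = self.swap.pop(page)
            self._make_room(1)       # may push another LRU page out
            self.ram[page] = owner   # the swap-in itself is a disk read
        elif page in self.ram:
            self.ram.move_to_end(page)

    def allocate(self, owner, mb):
        """Reserve `mb` pages for `owner`, swapping out LRU pages if needed."""
        self._make_room(mb)
        for i in range(mb):
            self.ram[(owner, i)] = owner

    def _make_room(self, mb):
        while len(self.ram) + mb > self.physical:
            page, owner = self.ram.popitem(last=False)  # least recently used
            if len(self.swap) >= self.swap_capacity:
                raise MemoryError("out of memory and swap")
            self.swap[page] = owner
            self.disk_writes += 1  # writing the page out hits the disk

mem = ToyMemory(physical_mb=4, swap_mb=4)   # tiny numbers for illustration
mem.allocate("browser", 3)
mem.allocate("vm", 3)   # only 1 MB free: 2 browser pages get swapped out
print(len(mem.swap), mem.disk_writes)  # 2 2
```

Note that the second allocation succeeds without the first application’s cooperation: the OS simply pushes the least-used pages out, at the cost of disk writes — exactly the behavior the two cases below walk through.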
On the other hand:
- Operating systems use free physical memory to cache disk reads and writes, so when a disk sector is read multiple times, subsequent reads are read from RAM, which is much faster.
- The OS will only cache disk reads/writes in free physical RAM, as it would be useless to “cache” disk to disk.
- When an application requests RAM, the OS will free RAM used by disk cache before allocating it to the requesting application. This operation doesn’t require disk activity if it frees RAM-cached reads or already-flushed disk writes; however, it does require disk writes if it frees unflushed RAM-buffered disk writes. Usually, the OS flushes cached disk writes when the system is idle, so you don’t really notice it, and when the time comes, everything is already flushed.
- Linux reports a process’s “resident set size” as its physical RAM usage. So if you measure your application’s RAM requirements, you should do it with your swap partitions disabled. (Please enlighten me about how Windows reports RAM for a process.)
- Disk writes are usually slower than disk reads.
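On Linux you can see this free-versus-cache split for yourself in /proc/meminfo, which lists them as separate fields. Here is a minimal parser sketch; the sample values are made up to match the numbers used in the scenarios below:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style lines ("MemFree:  123 kB") into a dict of MB."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts and parts[0].isdigit():
            info[key.strip()] = int(parts[0]) // 1024  # kB -> MB
    return info

# On a real Linux box you would feed it the live file:
#   with open("/proc/meminfo") as f:
#       info = parse_meminfo(f.read())
sample = """\
MemTotal:        2048000 kB
MemFree:           51200 kB
Cached:           665600 kB
"""
info = parse_meminfo(sample)
print(info["MemFree"], info["Cached"])  # 50 650
```

`MemFree` is the small emergency reserve; `Cached` is the much larger chunk that looks “used” but is really available to applications on demand.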
The actual truth behind this fallacy is: “sure, the OS handles it, but you kill the disk cache and encourage memory swapping, slowing down the whole system, including your own application”. So yes, the system “takes care” of it, but it is actually recovering from the programmer’s mistake at the cost of unnecessary system-wide slowdown and potential instability.
So let’s analyze two scenarios taken out of my own experience. I repeat: I am oversimplifying. Use this as a model.
Case 1: A browser caches on disk instead of RAM.
You have a system with 2 GB of RAM, out of which you have 1 GB free (as in “maybe used by disk cache but surely available for applications”). You fire up a web browser that caches resources on disk (because local disk is faster than the Internet) and needs about 300 MB of RAM.
1000 – 300 – 50 = 650
You end up with about 650 MB of RAM for disk cache optimization and 50 MB of actual free physical RAM.
Whenever the browser needs a network resource it tries to load it from disk first. However, disk reads get cached in RAM by the OS, so subsequent reads are served from RAM. The performance penalty is barely noticeable.
If the browser wants to cache a newly visited website, it saves it to disk. The OS buffers the write in RAM and postpones it until the system is idle (while you are reading the website). The performance penalty is barely noticeable.
Whenever another application reads from disk there is a high probability of hitting the cache because there is 650 MB of RAM available for this purpose. Even if missing the cache, whatever was read from disk will be cached in RAM for subsequent reading.
You, then, fire up a VM that requests 450 MB of RAM. The following occurs:
- The operating system frees up 450 MB of disk cache. Some of it requires writing, some not. This operation is only as slow as the disk writes required for unflushed data, so it’s not really that slow. Besides, the user somewhat expects it, having just commanded the PC to load up a VM.
- There is no swapping at all. There is no need.
- It finally allocates 450 MB to the VM.
- The VM writes to that physical RAM allocated for it.
There is still 200 MB available for disk caching, which the OS will try to efficiently use.
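The arithmetic of this case can be captured in a tiny model. This is a sketch of the article’s simplified model; `allocate_from` is a made-up helper, not an OS API:

```python
def allocate_from(request_mb, cache_mb):
    """Model how the OS satisfies an allocation: drop disk cache first,
    and swap out only whatever the cache cannot cover.
    Returns (cache_dropped, swapped_out, cache_left)."""
    cache_dropped = min(request_mb, cache_mb)
    swapped_out = request_mb - cache_dropped
    return cache_dropped, swapped_out, cache_mb - cache_dropped

# Case 1: 1000 MB free, browser takes 300 MB, 50 MB kept in reserve,
# leaving 650 MB of disk cache. The VM then requests 450 MB.
dropped, swapped, cache_left = allocate_from(450, 650)
print(dropped, swapped, cache_left)  # 450 0 200
```

The request fits entirely inside the disk cache, so nothing is swapped and 200 MB of cache survives — which is why the system stays responsive.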
Now, the browser wants to load something from “disk cache”. There is some probability, higher than zero of course, of hitting the cache and having the OS serve the data directly from RAM. Let’s assume not: it is read from disk. The browser is still accelerating web surfing, as the local disk is still faster than the Internet. Furthermore, the OS will cache the object in RAM for subsequent access.
Of course, the VM continues running from RAM without ever needing any kind of swapping or disk thrashing. The system is fully responsive. The user knows (or should know) that to free up RAM they can close the VM or the browser.
Case 2: A browser uses more RAM than it should, for caching.
You have a system with 2 GB of RAM, out of which you have 1 GB free (as in “maybe used by disk cache but surely available for applications”). You fire up a web browser that could work really nice with 300 MB of RAM but instead caches everything up for the sake of speed and ends up using 600 MB of RAM.
1000 – 600 – 50 = 350
You end up with about 350 MB of RAM for disk cache optimization and 50 MB of actual free physical RAM.
Whenever the browser needs a network resource it loads it from its own RAM allocation, so it is really fast. (It still has to load it from disk the first time, if it isn’t already in RAM.)
Whenever any other application reads from disk, the probability of hitting the cache is lower, because a large chunk of memory is exclusively allocated to the web browser. This significantly increases the probability of physical disk access, which may slow the whole system down.
Minimizing the browser does not free memory for other applications. The memory is still allocated for its exclusive use.
You, then, fire up a VM that requests 450 MB of RAM. The following occurs:
- The operating system frees up 350 MB of disk cache. Some of it requires writing, some not. This operation is only as slow as the disk-writes required, so it’s not really that slow.
- It determines the least-used 100 MB of physical RAM and moves them out to swap space. This operation is slow, as it involves a lot of unavoidable on-the-fly disk writes.
- It finally allocates 450 MB to the VM application.
- The VM writes to that physical RAM allocated for it.
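The steps above, as back-of-the-envelope arithmetic (sizes in MB; a sketch of this article’s simplified model, not an OS API):

```python
# Case 2: the browser hoards 600 MB instead of 300, so only 350 MB of
# disk cache is left when the VM requests 450 MB.
cache_available = 350
vm_request = 450

cache_dropped = min(vm_request, cache_available)   # step 1: drop disk cache
swapped_out = vm_request - cache_dropped           # step 2: swap out the rest
cache_left = cache_available - cache_dropped

print(cache_dropped, swapped_out, cache_left)  # 350 100 0
```

Compare with Case 1: the same 450 MB request now forces 100 MB of slow, on-the-fly swapping and leaves zero disk cache.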
The system has no memory for disk cache anymore. Whenever other applications need to read from disk, they will miss the cache and the OS will have to physically access the disk to serve the request. This is slow. What is worse, the disk access will not be cached for subsequent reading.
Now, the browser wants to load something from its own “RAM cache” (which, by the way, may well be out in swap space by now). One of two things may happen:
If the cached resource is in swap space, the OS will need to read it back. Since that data is now a “more recently used” memory page, the OS may have to page out some contents of the VM or of another least-used application to make room. This operation is slow because it requires another lot of unavoidable on-the-fly disk writes. Or:
If the resource is still in the application’s own “RAM cache”, it is retrieved flash-fast from RAM. However, the VM still needs its own RAM to continue, and so do all the other applications. This forces the operating system to swap memory again. Also, there is no RAM available for disk cache, so other applications’ disk requests will still be served continuously from disk, without the possibility of RAM caching for subsequent access. This generates constant disk activity, slowing the whole system down, including the browser. The browser’s “extra RAM cache” did no good at all.
Of course, the VM continues running and its RAM is continuously needed back, so disk writing becomes constant. The whole system slows down by what is called disk thrashing. In really bad cases, the system may become unresponsive, preventing the user from even closing one of the two applications to recover the system.
Yes, RAM is there to be used when it is needed, not wasted. RAM is a limited resource. There are ways to use RAM efficiently. For instance, loading the indexes of a mailbox in RAM (but not the whole mailbox content), if done correctly, may significantly speed up mail searching.
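As a sketch of the mailbox example: keep only per-message subjects and file offsets in RAM, and read a body from disk only when it is actually requested. The record format and function names here are made up purely for illustration:

```python
import io

def build_index(mailbox):
    """Scan a mailbox-like stream once, keeping only (subject, body offset)
    in RAM. Records are assumed to look like "Subject: ...\\n<body>\\n---\\n"
    (a made-up format, not a real mbox)."""
    index = []
    while True:
        line = mailbox.readline()
        if not line:
            break
        if line.startswith("Subject: "):
            index.append((line[len("Subject: "):].strip(), mailbox.tell()))
    return index

def read_body(mailbox, offset):
    """Fetch one message body from disk only when it is actually needed."""
    mailbox.seek(offset)
    body = []
    for line in mailbox:
        if line.rstrip("\n") == "---":
            break
        body.append(line)
    return "".join(body)

raw = ("Subject: lunch\nPizza at noon?\n---\n"
       "Subject: report\nDraft attached.\n---\n")
mbox = io.StringIO(raw)          # stands in for an on-disk mailbox file
index = build_index(mbox)
hits = [(s, off) for s, off in index if "report" in s]  # search touches RAM only
print(read_body(mbox, hits[0][1]), end="")  # Draft attached.
```

Searching sweeps only the small in-RAM index; the potentially huge message bodies stay on disk, where the OS cache can manage them.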
However, caching disk to RAM might not be a good idea. That caching is already done by the OS, so the application is just duplicating, and defeating, an efficient OS function. Sometimes it may be a good idea, but most probably, particularly in desktop applications, it is not.