Changes to ticket 678eb0c20a
By drh on 2009-09-16 16:20:55.
- Appended to comment:
drh added on 2009-09-16 16:20:55:
No, fossil is not pulling the whole repository into memory... but it does construct the entire HTTP reply message in memory. That involves reading the 18MB blob out of the database and decompressing it, then appending that blob to the end of the growing HTTP reply (there is 36MB for you already), then compressing the entire HTTP reply using zlib. As the HTTP reply buffer grows, it uses the trick of realloc-and-copy, so growing a buffer may transiently require twice the size of the buffer. Every compress/uncompress operation involves keeping both the original and the modified copy in RAM, at least for a short while. Add to that malloc fragmentation and the memory that SQLite will request (we configure SQLite in fossil to use lots of memory because that makes it faster, and because fossil is designed to run on a workstation, not a cellphone!) and it is little wonder that you are running out of RAM.
The obvious solution here is to get more RAM. That can't be hard. I use Linode as my VPS provider and the smallest system they sell is 360MB. I also use fossil on Hurricane Electric. HE is a shared hosting provider, not a VPS provider, but shared hosting is sufficient to support fossil and you don't have arbitrary RAM limits.
Could fossil be reengineered to support 18MB binaries on a 64MB server? Probably. But why would you want to? RAM is cheap and getting cheaper daily. Fossil does not use that much memory relative to the size of the files it stores - perhaps a multiplier of 3 or 4, but that is not so much in the grand scheme of things. This problem has a very simple solution: throw memory at it and move on.
- Change resolution to "Not_A_Bug"
- Change status to "Closed"