Ticket UUID: 678eb0c20a07f601aa2b11439b7ac49ecaf21e5c
Title: Large files make memory usage spike through the roof.
Status: Closed
Type: Incident
Severity: Severe
Priority:
Subsystem:
Resolution: Not_A_Bug
Last Modified: 2009-09-17 00:46:35
Version Found In: 6cf7548a3c
Description & Comments:
I'm hosting a few repositories for small projects on a tiny (64MB RAM) VPS. With all services running (lighttpd and sshd) this leaves me with 44MB available RAM. This has not caused a problem until today.
Today I uploaded a Fossil repository that has two sizable binary files: one 18MB, one 14MB. I found to my surprise that I simply could not clone the repository when testing it. I got cryptic error messages like these:

    $ fossil clone http://retrotech.boldlygoingnowhere.org/repos/RSTS rsts.fsl
                   Bytes      Cards  Artifacts     Deltas
    Send:            552         22          0          0
    Received:       4852         29          0          0
    Send:            472          9          0          0
    1fossil: unknown command: Status:

Investigation by trying a local clone on the VPS showed me that Fossil was running out of memory:

    $ fossil clone http://retrotech.boldlygoingnowhere.org/repos/RSTS rsts.fsl
                   Bytes      Cards  Artifacts     Deltas
    Send:            552         22          0          0
    Received:       4149         25          0          0
    Send:            472          9          0          0
    out of memory

Instrumenting the memory profile on the remote side while running a clone from the server confirmed that, yes, the system rapidly runs out of RAM just as Fossil is supposed to be sending data down the pipe. It looks to me like it is pulling the whole repository into memory (weighing in at 12MB on disk) and then doing other work besides (decompressing in situ, perhaps?) in order to clone.

drh added on 2009-09-16 16:20:55:

But it does construct an entire HTTP reply message in memory. That involves reading the 18MB blob out of the database and decompressing it, then appending that blob to the end of the growing HTTP reply (there is 36MB for you already), then compressing the entire HTTP reply using zlib. As the HTTP reply buffer grows, it uses the trick of realloc-and-copy, so growing a buffer might require twice the size of the buffer. Every compress/uncompress operation involves keeping both the original and the modified copy in RAM, at least for a short while. Add to that malloc fragmentation and the memory that SQLite will request (we configure SQLite in Fossil to use lots of memory because that makes it faster, and because Fossil is designed to run on a workstation, not a cellphone!) and it is little wonder that you are running out of RAM.

The obvious solution here is to get more RAM. That can't be hard. I use Linode as my VPS provider, and the smallest system they sell is 360MB. I also use Fossil on Hurricane Electric. HE is a shared hosting provider, not a VPS provider, but shared hosting is sufficient to support Fossil and you don't have arbitrary RAM limits.

Could Fossil be reengineered to support 18MB binaries on a 64MB server? Probably. But why would you want to? RAM is cheap and getting cheaper daily. Fossil does not use that much memory relative to the size of the files it stores - perhaps a multiplier of 3 or 4, but that is not so much in the grand scheme of things. This problem has a very simple solution: throw memory at it and move on.

anonymous added on 2009-09-17 00:46:35:
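As an illustration of the realloc-and-copy growth that drh describes above, here is a minimal, self-contained C sketch. The names (ReplyBuf, reply_append) are hypothetical and this is not Fossil's actual blob code; it only shows why appending an 18MB decompressed artifact to a geometrically grown reply buffer can transiently require roughly twice the buffer's size, on top of the decompressed copy of the artifact itself.

    /*
    ** A minimal sketch of a realloc-grown buffer.  When realloc cannot extend
    ** the block in place it allocates a new block and copies the old contents,
    ** so while the copy is in flight the process needs roughly old-size plus
    ** new-size bytes -- about twice the buffer.
    */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
      char *z;        /* buffer contents */
      size_t nUsed;   /* bytes currently used */
      size_t nAlloc;  /* bytes currently allocated */
    } ReplyBuf;

    static void reply_append(ReplyBuf *p, const char *aData, size_t nData){
      if( p->nUsed + nData > p->nAlloc ){
        /* Grow geometrically.  realloc may allocate a fresh block and copy
        ** the old one into it, so the transient footprint is the old
        ** allocation plus the new one. */
        size_t nNew = (p->nUsed + nData) * 2;
        char *zNew = realloc(p->z, nNew);
        if( zNew==0 ){
          fprintf(stderr, "out of memory\n");
          exit(1);
        }
        p->z = zNew;
        p->nAlloc = nNew;
      }
      memcpy(p->z + p->nUsed, aData, nData);
      p->nUsed += nData;
    }

    int main(void){
      ReplyBuf reply = {0, 0, 0};
      /* Stand-in for the 18MB artifact after it has been read from the
      ** database and decompressed: one copy here ... */
      size_t nBlob = 18u * 1024u * 1024u;
      char *zBlob = malloc(nBlob);
      if( zBlob==0 ) return 1;
      memset(zBlob, 'x', nBlob);

      /* ... and a second copy inside the growing reply buffer, before the
      ** whole reply would be handed to zlib, which briefly holds input and
      ** output at the same time. */
      reply_append(&reply, zBlob, nBlob);
      printf("blob: %zu bytes, reply buffer: %zu bytes allocated\n",
             nBlob, reply.nAlloc);

      free(zBlob);
      free(reply.z);
      return 0;
    }

Using the figures from this ticket, that pattern alone already accounts for roughly 36MB (the decompressed blob plus the reply buffer) before zlib's input and output copies and SQLite's cache are counted, which is consistent with drh's multiplier of 3 or 4 and comfortably exceeds the 44MB the reporter has free.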