How to reduce image processor memory usage via serialization


Resizing large (many-pixel) images takes a lot of memory, because the image must be expanded into memory (or swap, or disk, or whatever) as a bitmap for manipulation.

ZP's on-the-fly caching (and the precacher/CacheManager, which just calls the on-the-fly cacher) does this resizing in parallel httpd processes via i.php, and the number of parallel processes is limited only by server settings (Apache, for instance). This can result in very large memory usage, and even on a quad-core server it's hard to imagine that handling more than about 8 resizes in parallel (probably far fewer) is beneficial. At some point the processes are competing for memory and CPU and nothing is gained.

For me, I run out of memory entirely pretty soon, and since I don't believe in swap, I get the oom-killer (process killer), and it's a pain to make the oom-killer behave nicely. (Several processes competing simultaneously for swapped memory would be nuts; swap is for parking your web browser while you use your spreadsheet, not for parallel processing.)

In contrast, an antiquated single-core server can handle serving the JPEGs at pretty high bandwidth (we're talking relative to SOHO here, not relative to Google). Normal image serving only requires enough memory for the JPEG and the process overhead. So once images are created, the resource requirement is much lower and images can easily be served in parallel. The same server can also get the images cached, just not all at once.

Existing Solutions:

1) Use smaller images.
I don't want to, and it's not necessary.

2) The Imagick memory limit doesn't work for me. I even tried modifying the code to apply limits to all three Imagick limits: "MEMORY", "MAP", and "DISK".

I think it limits each process individually. I seriously doubt there's any inter-process accounting of resources involved in that Imagick option. Once you give enough freedom that any one process can actually work, then they all work and you're back to square one. Actually, I gave it quite a lot of room and never got any of them to work, but I didn't push it very far (because I don't like crashing my server, and I had already achieved high parallel memory usage without getting results). Anyway, it didn't work for me.

3) Limit Apache to a couple of processes.
As many have pointed out, web browsing is, and should be, very parallel. There's no need to limit the whole browsing experience (in ZP and my other pages) just because image caching is enormously resource intensive. It only happens once: after the image is cached, that's that. To me, this is not a reasonable option. The problem is parallel image caching. Serial web browsing is not a solution to parallel image caching problems. Serial image caching is.

My solution (so far):

So obviously my solution is to serialize the image cacher.
The usual all-purpose way to serialize something (if you can't just use exec instead of fork) is to use a blocking lock (a mutex). So that's all I did.

So far I haven't done this beautifully. I just put this:

set_time_limit(1000);
// make sure the file to lock is there, and get a handle on it
$fp = fopen("lockfile", "w+");
if ($fp && flock($fp, LOCK_EX)) { // acquire an exclusive (blocking) lock

    // the rest of i.php goes here

    flock($fp, LOCK_UN); // release the lock
    fclose($fp);
} else {
    echo "i.php could not get a file lock";
}

This isn't very nice or clever. At least I could invert the conditional to put all the code at the top and un-nest it a bit, but whatever. This was just a quick and dirty round one
and it works GREAT!
Now that my cache processes are not piling up, they work fast and I can use ZP.

I'm not sure the timeout change is needed; I threw it in before even testing. It's a long timeout, and it can also be handled globally in php.ini. As for browser timeouts: in the end it takes as long as it takes, and it takes less time in series than it does when it's over-parallelized. For my purposes, I have not experienced any real timeout problems. I did need to hit refresh once when one last thumb didn't show up, but that has not been reproducible. Generally it has just worked, and worked smoothly.

For the most part, I will use it with the pre-cacher anyway and that is working fine too.


TODO:

1) Improve the lock location
-- this only needs to go around the cacheImage function, not around the whole image processor. That could improve performance in some circumstances, I think.

2) Change the lock mechanism (probably put it in a class or functions to generalize it, and then play with how to do it in those functions)

The bad: there seem to be some caveats about flock behaving differently on various systems with different open modes. I think it's possible to avoid those problems, but some still feel it's not the most platform-independent solution. It doesn't work on some old file systems, and maybe not on NFS filesystems (I'm not sure).
The good: lock cleanup is automatic. There's no need to check a process pid written to a file and clean up stale locks (and pids are not unique anyway; collisions are actually quite common).
While there may be issues, it's mostly portable.

MySQL locks:
My better idea is to use MySQL locks. Let MySQL worry about the portability of mutexes (and I assume they do). ZP uses MySQL anyway, so use a MySQL table lock on a dummy table. They also clean up automatically when the session ends. The bad part is that I don't have any experience with controlling MySQL sessions in PHP, but it doesn't look very hard.
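A rough sketch of what such a MySQL lock helper could look like, using MySQL's named advisory locks (GET_LOCK/RELEASE_LOCK, which are real MySQL functions) rather than an actual table lock. The helper names and the $query_one callable are my own invention, not ZP code; passing a query-runner callable keeps the sketch independent of whether you use mysqli, PDO, or ZP's own query wrapper:

```php
<?php
// Sketch only: acquire/release a named MySQL advisory lock.
// $query_one is any callable that runs one SQL statement and returns the
// single scalar it selects. With mysqli you might pass something like:
//   $query_one = function ($sql) use ($db) {
//       $row = $db->query($sql)->fetch_row();
//       return $row[0];
//   };

function acquire_named_lock(callable $query_one, $name, $timeout = 30) {
    // GET_LOCK returns 1 on success, 0 on timeout, NULL on error.
    $got = $query_one(sprintf("SELECT GET_LOCK('%s', %d)",
                              addslashes($name), (int)$timeout));
    return $got == 1;
}

function release_named_lock(callable $query_one, $name) {
    // RELEASE_LOCK returns 1 if released, 0 if held by another session,
    // NULL if the lock does not exist.
    $got = $query_one(sprintf("SELECT RELEASE_LOCK('%s')", addslashes($name)));
    return $got == 1;
}
```

One caveat worth knowing: before MySQL 5.7, a session can hold only one GET_LOCK lock at a time (acquiring a second silently releases the first), which is presumably why a multi-lock scheme needs one connection per lock.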

3) Add a menu option. I think that's easy.

4) Implement a configurable level of parallelization.
This requires looping through n non-blocking locks with a small wait time at the top of the loop, until one of those n locks succeeds. Unfortunately, flock's non-blocking option has mixed reviews on Windows. The good thing is that the behavior would automatically revert to single-process serialization rather than break entirely, but maybe it's better not to use flock then.

Even on my system I could benefit from two or three processes running at once.
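A sketch of that loop using flock (the file names, polling interval, and helper name are all placeholders of mine, not ZP code):

```php
<?php
// Sketch: configurable parallelization with n non-blocking flock slots.
// A process tries each slot's lock file with LOCK_NB; if all n are busy,
// it naps briefly and polls again until some slot frees up.

function acquire_slot($dir, $n, $wait_us = 100000) {
    while (true) {
        for ($i = 1; $i <= $n; $i++) {
            $fp = fopen("$dir/imagelock_$i", 'c'); // 'c': create, don't truncate
            if ($fp && flock($fp, LOCK_EX | LOCK_NB)) {
                return $fp; // caller holds slot $i until flock(LOCK_UN)/fclose
            }
            if ($fp) {
                fclose($fp);
            }
        }
        usleep($wait_us); // all n slots busy: wait a bit, then poll again
    }
}

// usage sketch:
// $slot = acquire_slot(sys_get_temp_dir(), 3);
// ... do one image resize ...
// flock($slot, LOCK_UN);
// fclose($slot);
```

Since the lock dies with the file handle, a killed process frees its slot automatically; the stale lock files themselves are harmless.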

5) Add an option to disable resizing when it's not being called by the precacher.

I'm on the fence about this option now, since I was able to enforce serialization without it. If the DOS bug
has really been solved well, then I probably don't care.

However, if it hasn't (I'll see soon), then I have no real need for on-the-fly caching once I've cached everything anyway, and it represents a real security threat. Once you're controlling use of the resizer anyway, it's easy to put in a conditional asking whether it's being requested by the CacheManager or not (by passing a related parameter, like the admin parameter). Once you've precached everything, there should (for my use at least) be no valid reason for anyone to request a resize. I have also already tested that returning without resizing does not cause massive breakage. At worst, if the file really isn't precached, you just get a broken-image icon, exactly as you'd expect if you select such an option and don't precache properly.

6) Yes, there was a return statement in a code path in the middle of the file, and I didn't put lock cleanup there. Anyway, it's irrelevant for a couple of obvious reasons now.

That's all for this installment.


  • Do you know for certain that the lock will be released when the script terminates? See:


    The automatic unlocking when the file's resource handle is closed was removed. Unlocking now always has to be done manually

    If the lock might be left behind, you will need some serious exception handling. Note also that it is not possible in PHP to trap all errors.
  • I've tested that with kill -9 on command-line PHP calls to a simplified version of this (one which replaced i.php with a sleep 30). Yes, the lock cleaned up, and the next attempt grabbed the lock without trouble.

    The test was on Linux. The manual claims that, yes, Apache will clean it up. If Apache itself gets killed -9, then I don't know; that might be reboot time. Like I said though, using a MySQL lock is probably better.

    I'm using PHP 5.3.13. I also saw that documentation. HOWEVER, that seems to just mean that if the handle is closed or deleted while the script keeps running, there's no automatic cleanup of the lock at that point. Elsewhere I found several mentions that Apache itself will clean it up if the process dies, and that's why I tested it. Here that's good enough; we're not using the lock in some persistent application.
  • Anyway, reliable mutexes are not a novel problem. Portability and cleanup are always the issues, obviously, but that's why you use a well-developed solution like MySQL, which probably comes with all kinds of automatic compile-time options to handle portability.

    Nonetheless, flock seems pretty darn good too.

    And hey, I'm all for critique and making things as good as possible, but it's not like there aren't other options in ZP that have some caveats. I don't see why this needs to be held to a higher standard. For a use case like mine, none of this can do worse than help, and it is helping, quite nicely actually.

    Edit: and the unlock/close should really go inside the conditional, but whatever; this is an alpha-0.0 version and it still works well.
  • I've written helper functions for MySQL locks (blocking, non-blocking, and multi-locks). They use the ZP connect and query functions. I'll post them as soon as I get a minute to test a minor bug workaround (a bug in MySQL: big bug, minor workaround) in the non-blocking lock, which is also needed for the multi-lock. The blocking lock is working great.
  • All points on the TODO above are implemented and working perfectly for me. After constructing MySQL "multi-locks", I can select an option in my image tab for how many image resizes to allow at once, and that's what I get.

    Since the locks are MySQL-based, they fit within the present official system requirements for ZP. They clear when the associated MySQL connection is terminated (one connection is required and established per lock) or when the PHP process terminates, so the worst possible hangs are limited by PHP timeout control. With a setting >1, one hung resize process can't stop the cacher anyway.

    I'm happy with mine set at 2 processes, but I'd bet there are many for whom a setting lower than their Apache process max will result in speed improvements.

    Given that the present option for reducing memory use doesn't even work, I'd say this is pretty robust.
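For illustration, a multi-lock along these lines could simply try each of the n lock names non-blockingly (GET_LOCK with a zero timeout) and nap between passes. This is a sketch with invented names, not the actual helpers described above; $query_one is any callable that runs one SQL statement and returns the scalar it selects:

```php
<?php
// Sketch: MySQL-based multi-lock. Tries GET_LOCK on "<base>_1" .. "<base>_n"
// with timeout 0 (non-blocking) until one succeeds, so at most n processes
// can hold a slot at once. Remember: pre-5.7 MySQL allows one GET_LOCK per
// connection, so each slot needs its own connection behind $query_one.

function acquire_multi_lock(callable $query_one, $base, $n, $wait_us = 100000) {
    while (true) {
        for ($i = 1; $i <= $n; $i++) {
            // GET_LOCK returns 1 on success, 0 if the lock is busy.
            $got = $query_one(sprintf("SELECT GET_LOCK('%s_%d', 0)", $base, $i));
            if ($got == 1) {
                return "{$base}_{$i}"; // lock name, for RELEASE_LOCK later
            }
        }
        usleep($wait_us); // every slot busy: wait a bit, then try again
    }
}
```

The locks clear when their MySQL connection ends, which is what limits the damage a hung resize process can do.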
  • Where can I set an initial default value for an admin option?
    I've grepped around and can't find how this is done.

    My options work fine and my null behavior is well defined too, but I'd like to have the database filled with a default value, on installation or whatever.

  • I found it. Thanks for the help.
  • So after nothing but criticism, and telling me he wouldn't touch it and that I should do it myself, sbillard works on this behind my back without bothering to tell me, so that I could have avoided wasting my time.

    Then his version is stubbornly broken because he doesn't want to admit that I did it better.

    He assigns jobs to parallel queue slots by random number.

    For 4 jobs and 10 slots allowed by the admin option, there is a 50% chance (49.6% exactly) that only three slots will actually get used and the page takes twice as long to complete. Even for much higher numbers it doesn't "average out" quickly, and it gets much worse in between, reaching essentially a 100% chance of taking twice as long for 10 jobs and 10 slots and enough resources (there's almost zero chance 10 jobs will be uniformly placed in 10 slots by random selection). This scenario is known as the birthday problem, a famously non-intuitive statistical result.

    My parallelization assigned jobs to queues systematically and uniformly (and far more efficiently than the method I described above) and will almost always finish faster, very often much faster. But mine was rejected, as was my fix for this specific issue, or even acknowledgment that there is a problem. Then again, the original issue was never acknowledged either, so why should I be surprised. Actually, it was acknowledged by some here, just not by him.

    If you're going to duplicate someone's ongoing effort that they invented, without telling them, after telling them to do it themselves and that it can't/shouldn't be done, at least get it right.

    This basic premise that everyone was hesitant about has allowed me to serve full-size images from a VM with 700MB of RAM, on a single-core host built in 2003, at deliverable bandwidths many multiples faster than most people's internet connections, while doing other (memory-intensive) tasks. The improvement is now undisputed.
  • And he asked me how I know the random way is slow, I gave him the above explanations and this was the reply:

    "If you persist in thinking this should not be done then I will remove the serialization code entirely"

    "thinking" ... I'm not supposed to think it?

    Now maybe he was just respecting my opinion. To be fair I'd already given him a bunch of heat for giving me the run-around so he was probably upset. oh well.
  • acrylian Administrator
    This discussion should actually happen on the issue tickets and not here on the forum. Also you should talk with sbillard not "about" him here. This unnecessarily leads to issues we neither want nor need here (as we all know happens easily with written text). Thanks.

    I did not follow everything, nor do I have the time to look into that code. Sadly this is above my own knowledge, so I cannot tell who is wrong or right. I think he is trying out some things currently and felt not everything fits our environment. We briefly talked about that yesterday via IM at least. As said, the issue tickets are the better place for technical discussions like these.
  • There is no discussion on the issue tickets. He doesn't discuss. He didn't discuss that he was going to do this after telling me to do it.

    The guy treated me like dirt and I'm calling him out. I wouldn't be that upset if he rejected my work or made changes if he hadn't gone behind my back first.

    But back to the issue and re-summarize.

    The point of this little project is to limit the number of processes resizing images to a selectable number, let's say 10.

    We do this by providing 10 mutexes (locks), and all processes must somehow wait until one of those 10 mutexes is free. This was my original premise.

    I put the first process on mutex 1, the next on 2, then 3, and so on up to 10, then back to 1 again, and keep going, so all the locks have the same number of processes (or as close as numerically possible) waiting in line. My first version (before publishing) did better by letting each process just grab the first free slot it could find, but that took polling and more resources.

    This does require IPC, because something must keep track of the integer. I use the database (atomically, too), but it could be a file or whatever.

    The currently implemented approach is to instead just select a random number and send the process to that slot.

    If you have 10 slots and select 4 random numbers, the first one has a 10/10 chance of getting a free slot, the next 9/10, then 8/10 and 7/10. Multiplying gives 504/1000 that all 4 get free slots.

    That's a 496/1000 chance that one slot gets two processes waiting on the same lock, so one waits needlessly while there are 6 other locks free. An observant user will surely wonder why only 3 jobs are running when he selected 10x concurrency. He will ask: why is the 4th job waiting?

    In the 10-job, 10-slot case, there's a 9,996 out of 10,000 chance you'll have an unused slot.

    As the numbers get bigger the statistics get more complicated. I'll spare the math this time, but you still lose by a lot until you have a really enormous number of processes, and even then you lose some, just less in relative terms.

    FYI, in the birthday problem it takes 23 people before there's a 50% chance that two share a birthday, and about 57 for 99%. Of course, that's 365 slots.
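The numbers above are easy to verify. A short script (illustrative only) multiplies out the probability that all randomly assigned jobs land on distinct slots:

```php
<?php
// Probability that $jobs random assignments into $slots all land on
// distinct slots (i.e., no process waits while another slot sits idle).
// This is the same product as the classic birthday problem.
function p_all_distinct($jobs, $slots) {
    $p = 1.0;
    for ($i = 0; $i < $jobs; $i++) {
        $p *= ($slots - $i) / $slots; // i-th job must miss the $i taken slots
    }
    return $p;
}

printf("4 jobs, 10 slots:    %.3f\n", p_all_distinct(4, 10));   // 0.504
printf("10 jobs, 10 slots:   %.5f\n", p_all_distinct(10, 10));  // 0.00036
printf("23 people, 365 days: %.3f\n", p_all_distinct(23, 365)); // 0.493
```

The complements of these values are the collision chances quoted above: 496/1000 for 4 jobs in 10 slots, and 9,996 out of 10,000 for 10 jobs in 10 slots.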
  • acrylian Administrator
    That's why I said "discussion should happen on the issue tickets". Sbillard can be a bit rough-sounding at times, but that he even experiments with your submissions is a good sign.

    Well, I cannot speak for him or why he did what, as I am not familiar with those code parts. But please mind your tone anyway, even if you feel treated wrongly...

    Thanks for the explanation; I did get the actual problem being discussed. The image processor stuff is sbillard's domain (and the Imagick part of that kagutsuchi's).
  • Well you seem reasonable, always have. Nobody likes complainers, I'm sorry, but sometimes there are complaints. Unfortunately the most effective people are often the hardest to deal with. I probably fit the hard to deal with category just a tiny smidge ;), but others can judge if I'm effective at anything. sbillard clearly is, but everyone is incorrect sometimes. I'll hope the IM discussions and such sort this a little. I don't think sbillard will talk to me about this with an open mind now.
  • And at least I learned some PHP and two of its MySQL interfaces. Maybe someday, if I still remember it, that will come in handy.
  • acrylian Administrator
    We really do appreciate any complaint or critique or other input, and always have. Of course no one needs to say "yes" to everything we do.

    We have several times changed our minds about things later on. ZP is quite complex, having grown naturally over several years, so maybe sbillard sees possible conflicts with the changes in areas we don't see. Sometimes it takes some time to check things out. He is clearly more familiar with that core stuff than both of us.
    For argument's sake I'll say maybe you're right regarding the MySQL stuff. But regarding algorithms and statistics, I'm not ready to concede that point at all. The algorithm I'm talking about is not tied to any particular tools; it's just a different way of selecting which lock to use. Let's just say I'm pretty sure I understand random numbers well enough, and anyone here who wants to probably does too.

    There's no code structure dispute involved in that.
  • This is the amount of code we're talking about:

        function __construct($lock = 'zP', $concurrent = NULL) {
            if ($concurrent) {
                $lock .= '_' . rand(1, $concurrent);
            }
            $this->lock = $lock;
        }

    my version is of course a tiny bit longer.
  • acrylian Administrator
    I cannot really say anything as I am not familiar with those parts at all so I better shut my mouth on that..:-)
  • The only line that matters is the one where the specific mutex assigned is chosen randomly from 1 to n, where n is the number of allowed processes.
    There's no way 2 processes out of ten won't end up sitting on the same mutex when n is 10 (well, the first one runs; it doesn't sit). Code it any way you want, but a random number doesn't cut it.

    but I understand your position.