Viewing 11 posts - 1 through 11 (of 11 total)
  • #1222974

    Hi there,

    After facing issues with object caching enabled (discussed and mostly solved here: ), a few days ago we received alerts that our DB server was writing 300 MB of binary logs per minute(!). We currently have Object Caching DISABLED and were still able to reproduce the problem.

    Checking the Enfold forums, there are several open and unsolved threads mentioning issues with the merged file generation and massive binlog production since February:

    I opened this post to re-raise attention to this topic, as we can confirm that there are still massive issues which are quite hard to detect on a regular website.
    Checking server analytics, we can confirm massive database writes starting at the beginning of March, when we updated Enfold to 4.7.3. These writes continued to grow over time, leading to single options being ~30–50 MB for a high-traffic site, and still >10 MB for medium-traffic sites.
    Or in other numbers: the options aviaAsset_avia-footer-scripts and aviaAsset_avia-head-scripts contained arrays with around 30,000 to 50,000 entries (‘error-generating-file’), which grew with each frontend or backend call. And that particular high-traffic site had only been live for about one week.

    Most people will not really notice these issues, as the DB grows slowly. But the loaded options grow bigger and bigger and, since they are autoloaded, slow down the whole page – eventually leading to outages when the database suddenly uses up all available space. In more advanced setups like ours, the binary logs of the database rapidly fill up the disk.

    We tried some of the suggested solutions in the other threads, but ended up with a more radical workaround that completely disables your error handling for the file generation as well as the unique-id handling:

    /**
     * Enfold error from 4.7.0:
     * Enfold endlessly writes 'error-generating-file' entries to the
     * aviaAsset_avia-[location]-scripts rows of the options table.
     * Only write the option if there is no error, and clean up the
     * existing mess in the database.
     */
    function custom_update_option_aviaAsset_avia_scripts( $value, $old_value, $option ) {
    	if ( is_array( $value ) && in_array( 'error-generating-file', $value, true ) ) {
    		// Clean up Enfold's massive entries, if existing
    		return array_filter( $value, function( $arr_value ) { return $arr_value !== 'error-generating-file'; } );
    	}
    	return $value;
    }
    add_filter( 'pre_update_option_aviaAsset_avia-head-scripts', 'custom_update_option_aviaAsset_avia_scripts', 10, 3 );
    add_filter( 'pre_update_option_aviaAsset_avia-footer-scripts', 'custom_update_option_aviaAsset_avia_scripts', 10, 3 );

    // Do not allow Enfold to add a unique id to generated files. It does not work.
    function custom_do_not_use_enfold_uniqid_for_generated_files( $uniqid, $file_type, $data, $enqueued, $file_group_name, $conditions ) {
    	return "";
    }
    add_filter( 'avf_merged_files_unique_id', 'custom_do_not_use_enfold_uniqid_for_generated_files', 10, 6 );

    DB writes and binlog sizes instantly dropped back to normal. This code also cleans up the mess in the options table and reduced the DB size by several hundred MB.
    On disk, files with the same hash but different unique ids still remain.

    When using the above filters to disable the unique id as well as the storage of error information, all obvious issues are solved. But this also means that the error handling you intended is disabled, which may lead to other problems we haven’t encountered yet.

    We would suggest an approach based on file content hashes, combined with some kind of caching based on the current approach of using the plugin and theme names and versions.
    The file content hash allows you to only generate files when the actual content changes, based on comparing the hashes. In addition, there is no need to bust the cache anymore as long as the file content remains unchanged, making the random unique id unnecessary.

    As hashing the file contents could become a performance bottleneck when used wrongly or too frequently on larger files, it should run only when really required. In addition, using crc32 for hashing instead of md5 might speed things up. Collisions could happen, but given their very low probability this shouldn’t be a big issue; if one happens, just regenerate the affected files. This should be a total edge case anyway.
    Regarding hashing performance:
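    To give a rough idea of how the two algorithms could be compared (a standalone sketch, not Enfold code; absolute timings depend entirely on the machine, so none are claimed here):

    ```php
    <?php
    // Rough timing of crc32 vs md5 on a merged-asset-sized payload.
    // This only illustrates how one might measure the difference.
    $payload = str_repeat( 'a{color:#333;}', 50000 ); // ~700 KB of CSS-like text

    foreach ( array( 'crc32b', 'md5' ) as $algo ) {
    	$start = microtime( true );
    	for ( $i = 0; $i < 100; $i++ ) {
    		$hash = hash( $algo, $payload );
    	}
    	printf( "%s: %.4f s for 100 hashes\n", $algo, microtime( true ) - $start );
    }
    ```

    Either way, the cost only matters if the hash is computed on every request; if it runs only when the level-1 key misses, even md5 should be negligible.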

    To outline one very particular and easy-to-implement idea:
    Enhance your current code to use the file content hash instead of a random unique id. You then have first-level caching with the hash over names and versions; the level-2 cache (and unique id) is the file content hash.
    Big advantage: both parts are calculated, not randomly generated. This removes most of the issues the current implementation has: it does NOT rely on the database for the random unique id, and it does not tend to generate masses of files for unchanged content. Because if names, versions and content are identical, the same file name is always used for the generated code.
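    The two-level naming could be sketched roughly like this (the function and file-name scheme are hypothetical illustrations, not Enfold’s actual API):

    ```php
    <?php
    // Sketch: deterministic merged-file name built from two calculated parts.
    // Level 1: hash over theme/plugin names and versions (cheap, no file reads).
    // Level 2: crc32 over the generated file contents (only computed on a level-1 miss).
    function merged_file_name( array $names_and_versions, string $file_contents, string $ext ) {
    	// Level-1 key: changes whenever a theme or plugin version changes.
    	$level1 = hash( 'crc32b', implode( '|', $names_and_versions ) );

    	// Level-2 key: changes only when the actual generated content changes.
    	$level2 = hash( 'crc32b', $file_contents );

    	// Same names, versions and content => always the same file name.
    	return "avia-merged-{$level1}-{$level2}.{$ext}";
    }
    ```

    If the computed name already exists on disk, nothing needs to be written or stored; only a level-1 miss triggers regeneration and a single content hash.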

    What still needs some thought is what to store in the database / object cache to reduce file operations to a minimum. In an optimal world, you shouldn’t need to change anything there. After our experience, you should absolutely drop the idea of saving “error logs” to an autoloaded option.
    Write something to the PHP error log instead, or write an option to display a nag to the admin and fall back to the non-merged files. This should be fine for most use cases in our opinion.

    We do hope that this leads to an even better version of the performance features and that you can follow up on our findings and suggestions. If needed, we are happy to share more details with you.

    Have a great and sunny day,



    Hey Jan,

    Thanks for this info.

    I dug into the code of enfold\config-templatebuilder\avia-template-builder\php\asset-manager.class.php.

    I found a bug that generated the “error-generating-file” entries spoiling the db.
    Referring to I also added a temporary fix.

    Could you please check if this solves the problem?

    Replace the content of enfold\config-templatebuilder\avia-template-builder\php\asset-manager.class.php with

    In functions.php add

    add_theme_support( 'avia_redis_cache_fix' );

    Save theme options to clear db entries.

    I added functions update_option_fix_cache, delete_option_fix_cache and fix_all_options_cache.

    Best regards,

    PS. Please remove the changes you made.

    • This reply was modified 3 years, 10 months ago by Günter.

    Hey Günter,

    thanks for your quick response. It looks good so far on our staging servers with our workarounds disabled. Nice and easy approach by the way!

    Personally, I would vote against the “alloptions” race-condition fix for object caching in Enfold (it’s not just Redis) and would prefer a clean implementation of the wp_cache API itself.
    Having the alloptions fix within 3rd-party code makes it harder to control our environment. We deploy it ourselves, for example, and wouldn’t use it out of the Enfold code base.
    But I know that it takes more time to add and test the wp_cache API implementation, and I assume this is just a temporary workaround :-)

    I will give your fix a test run in production tomorrow morning. Right now is peak traffic, so we cannot do anything experimental.

    Have a great evening and thanks a lot,



    Good Morning Günter,

    The fix is applied and running smoothly on our production system. We will keep monitoring closely, but for now I would say that it solves the issue with the “error-generating-file” entries.

    As written previously: We have Object Caching DISABLED at the moment. So I can only confirm the fix for the database write issues at the moment.

    Best regards,


    Hi Jan,

    Thanks for your feedback.

    I extended the theme option Performance -> “Unique timestamp of merged files” to “Unique timestamp of merged files and WP object cache bug”.

    As we have only a few reports concerning the object cache problem with Enfold, I decided to keep the implementation the way it is. The risk of hacking WP core and negatively affecting existing sites that work fine now is too big. But I know the problem now and will keep an eye on it.

    I uploaded a beta version (see private content).

    It would be a great help if you could check on your staging site with object cache activated that it is working as expected – only one file is generated, and not one on each page load as before.

    Thank you in advance.

    Best regards,


    Can you also send me the beta theme?




    Sent link via email.

    Best regards,


    Hi Günter,

    sorry for the late reply. We were finally able to give it a test run.

    We updated Enfold to 4.7.5 with our fixes disabled: as expected, database and filesystem went crazy with the file generation.
    Updating to your beta solved the issues as expected.

    So I can confirm that your code fixes this issue.

    Still, I couldn’t see any kind of database cleanup. If there are old error entries for the header or footer scripts, they should be deleted, as they could be massive considering how long this code was live.

    Have a great day,


    Hi Jan,

    Thanks for the feedback and testing.

    Did you try to disable file merging (both CSS and JS), check option “Delete old CSS and JS files?” and save theme options?

    This should remove all options starting with “aviaAsset_%”,
    see config-templatebuilder\avia-template-builder\php\asset-manager.class.php, function reset_db_asset_list().

    Best regards,


    Hi Günter, this should work, if you say so :-)

    But it is not an option for pages with page caching enabled, or for those who simply update to the new Enfold version and didn’t know they had the issue. The database entry would only stop growing; there is no way to get it “back to normal”.
    That’s why I would have expected some kind of cleanup run, like the one we implemented by deleting all the error entries from the array.
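    For sites that cannot disable merging, such a cleanup could look roughly like this (a sketch assuming the standard WordPress $wpdb API; back up the options table before running anything like it):

    ```php
    <?php
    // Sketch of a one-off cleanup of Enfold's merged-asset bookkeeping options.
    // Decides which option names the cleanup targets (same prefix that
    // reset_db_asset_list() works on).
    function is_avia_asset_option( $option_name ) {
    	return strpos( $option_name, 'aviaAsset_' ) === 0;
    }

    // Inside WordPress (e.g. a temporary mu-plugin) the actual deletion
    // of all matching rows would be a single query:
    //
    //   global $wpdb;
    //   $wpdb->query( "DELETE FROM {$wpdb->options} WHERE option_name LIKE 'aviaAsset\_%'" );
    //   wp_cache_delete( 'alloptions', 'options' ); // refresh the autoloaded-options cache
    ```

    Run once and then remove; Enfold regenerates the options it actually needs on the next request.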

    But it is just a suggestion. If you know for sure that only a small group of people is affected by this, then a note within the release notes should be fine.




    Thank you for your feedback. I added a release note to version.txt.

    I will close this topic. In case you need further assistance please open a new thread.
    Have a great day.

    Best regards,

  • The topic ‘Still massive DB writes issues with merged js and css (w & w/o Object Caching)’ is closed to new replies.