Redis evicting almost all keys from cache in LRU


We have an application that uses a Redis AWS ElastiCache instance to speed up data access. In it, we store the data the application uses most frequently. Since we don't expect all of the data to fit in the available Redis memory, we rely on LRU eviction to keep the most relevant data in cache and evict less relevant data when the server hits its memory limit. We are using allkeys-lru for maxmemory-policy (with the default value of 3 for maxmemory-samples), and none of the keys have a TTL set.
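For reference, this is roughly how we would confirm those settings and watch the eviction counter against a stock Redis server (the hostname below is a placeholder; note that ElastiCache restricts the CONFIG command, so on ElastiCache these parameters are set and inspected through the cluster's parameter group instead):

```shell
# Placeholder endpoint; substitute your own Redis host.
HOST=my-cache.example.internal

# Confirm the eviction policy and sampling depth in effect.
redis-cli -h "$HOST" CONFIG GET maxmemory-policy
redis-cli -h "$HOST" CONFIG GET maxmemory-samples

# Watch the eviction counter climb during a "mass eviction" event (Ctrl-C to stop).
while sleep 5; do
  redis-cli -h "$HOST" INFO stats | grep evicted_keys
done
```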



We are observing some odd behavior in the LRU evictions. When keys are evicted during max-memory events, the algorithm is very aggressive and removes almost all of the data in memory. The eviction begins at a steady pace but ramps up very quickly (much faster than the rate of SET operations) and leaves only a small fraction of the original items in memory (see the ElastiCache graphs below).



[Graph: eviction rate ramp-up]



[Graph: evicted items]



After this "eviction event" ends, evictions stop completely. The cycle then repeats once the server hits max memory again, on average once every 24 hours. The problem is that while a mass eviction is underway, Redis performance is severely degraded and response times suffer significantly; on top of that, the cache has little relevant data to offer until it rebuilds. Another problem is that all of the replicas become completely unresponsive to queries while an eviction event is in progress, effectively making them unusable.



So my questions are:




  • Why do we see this ramp-up in eviction rate once evictions start?

  • Is there a way to configure Redis so that the eviction rate stays
    comparable to the SET rate, keeping the flow of keys in and out
    balanced?

  • Why do the replicas become unresponsive while eviction
    events are underway?


The Redis version in use is 2.8.24. Any help or insight is greatly appreciated.
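On the first question: Redis 2.8 does not track true LRU order; it samples maxmemory-samples random keys per eviction and evicts the least-recently-used key from that small sample. A rough simulation of that sampling scheme (a hypothetical sketch, not Redis's actual code) shows how a small sample size makes the approximation evict even recently touched keys:

```python
import random

def evict_sampled_lru(last_access, sample_size, rng):
    """Approximate LRU the way Redis 2.8 does: sample a few random keys
    and evict the one with the oldest access time from that sample."""
    sample = rng.sample(list(last_access), min(sample_size, len(last_access)))
    victim = min(sample, key=last_access.get)
    del last_access[victim]
    return victim

def simulate(sample_size, n_keys=1000, n_evictions=500, seed=42):
    """Count how often eviction removes a key from the most-recent half."""
    rng = random.Random(seed)
    # Key i was last touched at time i, so low-numbered keys are the
    # "true" LRU victims; high-numbered keys are the hot half.
    last_access = {f"k{i}": i for i in range(n_keys)}
    bad = 0
    for _ in range(n_evictions):
        victim = evict_sampled_lru(last_access, sample_size, rng)
        if int(victim[1:]) >= n_keys // 2:
            bad += 1
    return bad
```

Comparing simulate(3) against simulate(10) illustrates that with the default of 3 samples a noticeable fraction of evictions hit the hot half of the keyspace, while larger sample sizes track true LRU much more closely.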










      amazon-web-services cache redis






      asked 6 hours ago









santista
