Showing posts with label caching. Show all posts

Thursday, September 13, 2018

hibernate second level cache with Redis - will it improve performance?

Leave a Comment

I am currently developing an application using Spring MVC 4 and Hibernate 4. I have implemented the Hibernate second-level cache for performance improvement. If I use Redis, an in-memory data structure store used as a database, cache, etc., performance will increase, but will it be a drastic change?

4 Answers

Answers 1

You may expect drastic differences if you cache what is good to cache and avoid caching data that should not be cached at all. As beauty is in the eye of the beholder, so is performance. Here are several aspects you should keep in mind when using the Hibernate second-level cache:

No custom serialization - memory intensive
If you use second-level caching you will not be able to use fast serialization frameworks such as Kryo and will have to stick to Java Serializable, which is far from ideal.

On top of this, each entity type gets a separate region, and within each region there is an entry for every key of every entity. In terms of memory this is inefficient.

Lacks the ability to store and distribute rich objects
Most modern caches also provide compute-grid functionality. Having your objects fragmented into many small pieces decreases your ability to execute distributed tasks with guaranteed data co-location. This depends a little on the grid provider, but for many it would be a limitation.

Suboptimal performance
Depending on how much performance you need and what type of application you have, using the Hibernate second-level cache might be a good or a bad choice: good in that it is plug and play ("kind of..."), bad because you will never squeeze out the performance you could otherwise have gained. Also, designing rich models means more upfront work and more OOP.

Limited querying capabilities on the cache itself
This depends on the cache provider, but some providers are really not good at doing JOINs with a WHERE clause on anything other than the ID. If you try to build an in-memory index for a query on Hazelcast, for example, you will see what I mean.

Answers 2

Yes, if you use Redis, it will improve your performance.

No, it will not be a drastic change. :)

https://memorynotfound.com/spring-redis-application-configuration-example/

http://www.baeldung.com/spring-data-redis-tutorial

The above links will help you find a way to integrate Redis with your project.

Answers 3

Your question was already discussed here. Check this link: Application cache v.s. hibernate second level cache, which to use?

This was the most accepted answer, which I agree with:

It really depends on your application querying model and the traffic demands.

  1. Using Redis/Hazelcast may yield the best performance, since there won't be any round-trip to the DB anymore, but you end up having normalized data in the DB and a denormalized copy in your cache, which will put pressure on your cache update policies. So you gain the best performance at the cost of implementing the cache update whenever the persisted data changes.
  2. Using the 2nd level cache is easier to set up, but it only stores entities by id. There is also a query cache, storing ids returned by a given query. So the 2nd level cache is a two-step process that you need to fine-tune to get the best performance. When you execute projection queries, the 2nd level object cache won't help you, since it only operates on entity load. The main advantage of the 2nd level cache is that it's easier to keep in sync whenever data changes, especially if all your data is persisted by Hibernate.

So, if you need ultimate performance and you don't mind implementing your cache update logic that ensures a minimum eventual consistency window, then go with an external cache.

If you only need to cache entities (that usually don't change that frequently) and you mostly access those through Hibernate entity loading, then 2nd level cache can help you.
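The external-cache option described above is usually implemented as a cache-aside loop: read the cache first, fall back to the database on a miss, and invalidate on write. A minimal sketch, with a ConcurrentHashMap standing in for Redis and a HashMap for the database (all class and key names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: the application checks the cache first, falls back to
// the database on a miss, and invalidates the cached copy on every write.
// The ConcurrentHashMap stands in for an external store such as Redis.
public class CacheAsideDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> database = new HashMap<>();

    static String read(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                  // cache hit: no DB round-trip
        }
        String value = database.get(key);   // cache miss: load from the DB
        if (value != null) {
            cache.put(key, value);          // populate the cache for next time
        }
        return value;
    }

    static void write(String key, String value) {
        database.put(key, value);           // persist first
        cache.remove(key);                  // then invalidate the stale copy
    }

    public static void main(String[] args) {
        write("item:1", "v1");
        System.out.println(read("item:1")); // miss: loads "v1" from the DB
        write("item:1", "v2");              // invalidates the cached "v1"
        System.out.println(read("item:1")); // miss again: loads "v2"
    }
}
```

The "eventual consistency window" mentioned above is visible here: between the database write and the cache invalidation, a concurrent reader may still see the old value.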

Hope it helps!

Answers 4

It depends on the traffic.

If you have 1000 or more requests per second and you are low on RAM, then yes, use Redis nodes on another machine to take some of the load. It will greatly relieve your RAM and improve request speed.

But otherwise, do not use it.

Remember that you can adopt this approach later, once you see what your RAM and database connection pool usage look like.

Read More

Thursday, June 28, 2018

Downside of many caches in spring

Leave a Comment

Due to the limitation of not being able to evict entries based on a partial key, I am thinking of a workaround using the cache name as my partial key and evicting all (there would only be one) entries in the cache. For example, let's say there are 2 key-value pairs like so:

"123@name1" -> value1, "124@name2" -> value2

Ideally, at the time of eviction, I would like to remove all keys that contain the string "123". However, as this is not supported, the workaround I'm thinking of is to have the following:

"123" cache: "name1" -> value1

"124" cache: "name2" -> value2

Then at eviction, I would simply specify to remove all the keys in the "123" cache.

The downside of this of course is that there would be a lot of different caches. Is there any performance penalty to this?

From reading this, it seems Redis at least only uses the cache name as a prefix, so it is not creating multiple separate caches underneath. But I would like to verify my understanding.
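If that understanding is right, the layout can be sketched like this, with a single map standing in for the Redis keyspace and the cache name acting purely as a key prefix (all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "many caches" as key prefixes: every cache shares one keyspace
// and the cache name only prefixes the key ("123::name1"). Evicting a
// "cache" just removes every key carrying that prefix, so extra cache
// names carry no real storage penalty.
public class PrefixCache {
    static final Map<String, String> keyspace = new HashMap<>();

    static void put(String cacheName, String key, String value) {
        keyspace.put(cacheName + "::" + key, value);
    }

    static String get(String cacheName, String key) {
        return keyspace.get(cacheName + "::" + key);
    }

    static void evictCache(String cacheName) {
        // drop every entry whose key starts with this cache's prefix
        keyspace.keySet().removeIf(k -> k.startsWith(cacheName + "::"));
    }

    public static void main(String[] args) {
        put("123", "name1", "value1");
        put("124", "name2", "value2");
        evictCache("123");                       // removes only the "123" group
        System.out.println(get("123", "name1")); // null
        System.out.println(get("124", "name2")); // value2
    }
}
```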

I am also looking to use Redis as my underlying cache provider if that helps.

1 Answers

Answers 1

You can use a few approaches to overcome this:

  1. Use grouped data structures like sets, sorted sets and hashes: each of them supports a very high number of member elements, so you can use them to store your cache items and do the relevant lookups. However, do check the performance difference (it should be very small) of this kind of lookup compared to a direct key-value lookup. When you want to evict a group of cache keys of a similar type, you just remove that data structure's key from Redis.

  2. Use Redis database numbers: you would need to edit redis.conf to increase the maximum number of Redis databases. Redis databases are just numbers that provide a namespace in which your key-value pairs can live. To group similar items, you put them in the same database number and empty that database with a single command whenever you want to flush that group of keys. The caveat here is that, although you can reuse the same Redis connection, you have to switch databases with the Redis SELECT command.
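A sketch of the first approach, with a nested map standing in for Redis hashes (with a real client these would be the HSET/HGET/DEL commands; all class and method names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Grouped-eviction sketch: one Redis hash per group of cache entries.
// The nested map stands in for Redis hashes; with a real client you would
// issue the same three commands: HSET, HGET and DEL.
public class GroupedCache {
    static final Map<String, Map<String, String>> store = new HashMap<>();

    static void put(String group, String field, String value) {   // ~ HSET
        store.computeIfAbsent(group, g -> new HashMap<>()).put(field, value);
    }

    static String get(String group, String field) {               // ~ HGET
        Map<String, String> hash = store.get(group);
        return hash == null ? null : hash.get(field);
    }

    static void evictGroup(String group) {                        // ~ DEL
        store.remove(group);   // one operation drops the whole group
    }

    public static void main(String[] args) {
        put("123", "name1", "value1");
        put("124", "name2", "value2");
        evictGroup("123");                       // evicts every "123" entry at once
        System.out.println(get("123", "name1")); // null
        System.out.println(get("124", "name2")); // value2
    }
}
```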

Read More

Synchronization between Context.openFileInput and Context.openFileOutput

Leave a Comment

I have an Android Service running daily which does some data synchronization. Once a day it downloads a file and caches it to disk via context.openFileOutput:

String fileName = Uri.parse(url).getLastPathSegment();
try (FileOutputStream outputStream = context.openFileOutput(fileName, Context.MODE_PRIVATE)) {
    outputStream.write(bytes);
    // ...
} catch (IOException e) {
    // logging ...
}

This happens on a background thread. I also have a UI which contains a WebView. The WebView uses those cached resources if they are available via context.openFileInput:

@Override
public WebResourceResponse shouldInterceptRequest(WebView view, WebResourceRequest request) {
    String url = request.getUrl().toString();
    if (shouldUseCache(url)) {
        try {
            return new WebResourceResponse(
                    "video/webm",
                    Charsets.UTF_8.name(),
                    context.openFileInput(obtainFileName(url)));
        } catch (IOException e) {
            // If a cached resource fails to load, just let the WebView load it the normal way.
            // logging ...
        }
    }
    return super.shouldInterceptRequest(view, request);
}

This happens on another background thread independently from the service.

Can I rely on Context implementation and be sure that file reads and writes are safe, or do I have to take care of the synchronization myself? E.g. if the Service is currently writing data to a file and the WebView is trying to access it, will I run into a problem? If so, how should I implement the synchronization?

1 Answers

Answers 1

If the Service is currently writing data to a file and the WebView is trying to access it, will I run into a problem?

In such cases you can write the data to a file with something appended to its name, and change the name back once the download is finished, e.g.

context.openFileOutput(fileName + ".downloading", Context.MODE_PRIVATE))

and later, once the download is finished, rename the file to the original fileName. I assume you check for file presence in shouldUseCache(url), so everything will keep working normally. This avoids situations where a file is still downloading while you try to read it.
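A minimal sketch of the same pattern, with plain java.nio standing in for Android's openFileOutput (directory and file names are just examples):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Write the download to "<name>.downloading" and rename it to its final name
// only after the write completes, so a reader never sees a half-written file.
// ATOMIC_MOVE makes the rename all-or-nothing where the filesystem supports it.
public class SafeDownload {
    static Path download(Path dir, String fileName, byte[] bytes) throws IOException {
        Path tmp = dir.resolve(fileName + ".downloading");
        Path finalPath = dir.resolve(fileName);
        Files.write(tmp, bytes);                              // may take a while
        Files.move(tmp, finalPath, StandardCopyOption.ATOMIC_MOVE);
        return finalPath;                                     // visible only when complete
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cache");
        Path p = download(dir, "video.webm", new byte[] {1, 2, 3});
        System.out.println(Files.exists(p));                                     // true
        System.out.println(Files.exists(dir.resolve("video.webm.downloading"))); // false
    }
}
```

On Android the rename step would be a File.renameTo within the same app-private directory, which is likewise atomic on the same filesystem.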

Read More

Wednesday, May 30, 2018

How To Delete Cache Using Accessibility Service in Android?

Leave a Comment

I'm working on a cache cleaner app. After doing research on Google I found that the Android system has moved the "CLEAR_APP_CACHE" permission to the "signature, privileged" protection level, so I'm unable to clear the cache with the freeStorageAndNotify method.

Apps on the Google Play Store like CCleaner, Power Clean etc. use an accessibility service to delete cache.

I have also created a basic accessibility service for my app, but I don't know how to delete the cache of other apps.

1 Answers

Answers 1

You can get a list of installed apps and delete their cache like this:

public static void clearALLCache() {
    List<PackageInfo> packList = getPackageManager().getInstalledPackages(0);
    for (int i = 0; i < packList.size(); i++) {
        PackageInfo packInfo = packList.get(i);
        if ((packInfo.applicationInfo.flags & ApplicationInfo.FLAG_SYSTEM) == 0) {
            String appName = packInfo.applicationInfo.loadLabel(getPackageManager()).toString();
            try {
                // clearing app data
//              Runtime runtime = Runtime.getRuntime();
//              runtime.exec("pm clear " + packInfo.packageName);
                Context context = getApplicationContext().createPackageContext(
                        packInfo.packageName, Context.CONTEXT_IGNORE_SECURITY);
                deleteCache(context);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

public static void deleteCache(Context context) {
    try {
        File dir = context.getCacheDir();
        deleteDir(dir);
    } catch (Exception e) {}
}

public static boolean deleteDir(File dir) {
    if (dir != null && dir.isDirectory()) {
        String[] children = dir.list();
        for (int i = 0; i < children.length; i++) {
            boolean success = deleteDir(new File(dir, children[i]));
            if (!success) {
                return false;
            }
        }
        return dir.delete();
    } else if (dir != null && dir.isFile()) {
        return dir.delete();
    } else {
        return false;
    }
}

The commented-out "pm clear" call is equivalent to the Clear data option under Settings --> Application Manager --> Your App --> Clear data, while deleteCache() only clears the app's cache directory. (In comment)

To make your application support this, you should also follow the Privileged Permission Whitelisting guide from Google.

Read More

Thursday, February 22, 2018

Microsoft.Extensions.Caching.Redis select different database than db0

Leave a Comment

A question about understanding which Redis database is used and how it can be configured.

I have a default ASP.NET Core Web Application and a default-configured local redis-server (containing 15 databases).


Via the Package Management Console I have installed:

Install-Package Microsoft.Extensions.Caching.Redis 

Redis is configured in Startup.cs like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddDistributedRedisCache(option =>
    {
        option.Configuration = "127.0.0.1";
        option.InstanceName = "master";
    });
}

The code to read and write values into the cache is taken from the docs:

var cacheKey = "TheTime";
var existingTime = _distributedCache.GetString(cacheKey);
if (!string.IsNullOrEmpty(existingTime))
{
    return "Fetched from cache : " + existingTime;
}
else
{
    existingTime = DateTime.UtcNow.ToString();
    _distributedCache.SetString(cacheKey, existingTime);
    return "Added to cache : " + existingTime;
}

But this code only uses the default database db0, no matter what I configure.

E.g. using this configuration:

services.AddDistributedRedisCache(option =>
{
    option.Configuration = "127.0.0.1";
    option.InstanceName = "db6";
});

leads to:


What do I have to configure to use, e.g., db6?

Do I have to use StackExchange.Redis for this?

1 Answers

Answers 1

Microsoft.Extensions.Caching.Redis uses StackExchange.Redis to connect to Redis.

The configuration string format is documented on the StackExchange.Redis site. That said, you should be able to do:

services.AddDistributedRedisCache(option =>
{
    option.Configuration = "127.0.0.1,defaultDatabase=4";
    option.InstanceName = "master";
});
Read More

Monday, January 15, 2018

Volley using StringRequest not calling getParams for sending POST Request Parameters after 1st time

Leave a Comment

I am facing a problem where my POST request parameters are not sent to the server after the first time. I know Volley uses a cache mechanism for responses, but in my case the request parameter values can change at runtime, as I am using pagination in a RecyclerView.

So my question is: how can I send the POST request parameters every time without losing Volley's cache mechanism?

I have tried the approaches below and they get the job done (getParams() is called every time), but they lose the cached responses and I don't want that.

requestQueue.getCache().clear();

stringRequest.setShouldCache(false);

I have also searched Google and several Stack Overflow links but can't find a proper solution.

Below is my code:

StringRequest stringRequest = new StringRequest(Request.Method.POST, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.e("RES", response);

                GsonBuilder gsonBuilder = new GsonBuilder();
                gsonBuilder.setDateFormat("M/d/yy hh:mm a"); // Format of our JSON dates
                Gson gson = gsonBuilder.create();

                NewsFeedPOJO resultObj = (NewsFeedPOJO) gson.fromJson(response, (Class) NewsFeedPOJO.class);

                inCurrPage = Integer.parseInt(resultObj.getPagination().getCurrent_page());
                inTotalPage = Integer.parseInt(resultObj.getPagination().getTotal_pages());
                inCurrPage++;

                arrayList.addAll(resultObj.getNewsFeedList());
                if (isFtym) {
                    isFtym = false;
                    layoutManager = new LinearLayoutManager(MainActivity.this);
                    rcNewsFeed.setLayoutManager(layoutManager);
                    adapter = new NewsFeedAdapter(MainActivity.this, arrayList);
                    rcNewsFeed.setAdapter(adapter);
                } else {
                    adapter.notifyItemInserted(arrayList.size());
                    adapter.notifyDataSetChanged();
                }
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
            }
        }) {
    @Override
    protected Map<String, String> getParams() throws AuthFailureError {
        Map<String, String> map = new HashMap<>();
        map.put("user_id", "188");

        if (inCurrPage == 0)
            map.put("page", "1");
        else {
            map.put("page", "" + inCurrPage);
        }

        Log.e("RES", inCurrPage + "  PARA");
        return map;
    }
};

// RequestQueue requestQueue = Volley.newRequestQueue(MainActivity.this);
// requestQueue.add(stringRequest);
// requestQueue.getCache().clear();

// AppController.getInstance().addToRequestQueue(stringRequest);
// stringRequest.setShouldCache(false);
VolleySingleton.getInstance(this).addToRequestQueue(stringRequest);

using below Volley Dependency.

compile 'com.android.volley:volley:1.1.0' 
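One option worth noting: Volley's Request exposes a getCacheKey() method, which by default is derived from the URL alone, so two POSTs with different parameters collide in the cache. Folding the parameters into the key lets each page cache separately without disabling caching. A plain-Java sketch of such a key builder (overriding getCacheKey() in your StringRequest subclass to return it is an assumption about your setup):

```java
import java.util.Map;
import java.util.TreeMap;

// Builds a cache key from the URL plus the request parameters. Sorting the
// parameters first (via TreeMap) makes the key stable regardless of map
// iteration order, so equal parameter sets always produce the same key.
public class ParamCacheKey {
    static String build(String url, Map<String, String> params) {
        StringBuilder key = new StringBuilder(url);
        for (Map.Entry<String, String> e : new TreeMap<>(params).entrySet()) {
            key.append('|').append(e.getKey()).append('=').append(e.getValue());
        }
        return key.toString();
    }

    public static void main(String[] args) {
        // page 1 and page 2 now map to distinct cache entries
        Map<String, String> page1 = Map.of("user_id", "188", "page", "1");
        Map<String, String> page2 = Map.of("user_id", "188", "page", "2");
        System.out.println(build("https://api.example.com/feed", page1));
        System.out.println(build("https://api.example.com/feed", page2));
    }
}
```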

If you need more information please let me know. Thanks in advance; your efforts will be appreciated.

1 Answers

Answers 1

Did you check whether your Volley singleton is correct?

import android.content.Context;
import android.graphics.Bitmap;
import android.util.LruCache;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.ImageLoader;
import com.android.volley.toolbox.Volley;

public class VolleySingleton {
    private static VolleySingleton mInstance;
    private RequestQueue mRequestQueue;
    private ImageLoader mImageLoader;
    private static Context mContext;

    private VolleySingleton(Context context) {
        mContext = context;
        mRequestQueue = getRequestQueue();

        mImageLoader = new ImageLoader(mRequestQueue,
                new ImageLoader.ImageCache() {
                    private final LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(20);

                    @Override
                    public Bitmap getBitmap(String url) {
                        return cache.get(url);
                    }

                    @Override
                    public void putBitmap(String url, Bitmap bitmap) {
                        cache.put(url, bitmap);
                    }
                });
    }

    public static synchronized VolleySingleton getInstance(Context context) {
        if (mInstance == null) {
            mInstance = new VolleySingleton(context);
        }
        return mInstance;
    }

    public RequestQueue getRequestQueue() {
        if (mRequestQueue == null) {
            // getApplicationContext() is key, it keeps you from leaking the
            // Activity or BroadcastReceiver if someone passes one in.
            mRequestQueue = Volley.newRequestQueue(mContext.getApplicationContext());
        }
        return mRequestQueue;
    }

    public <T> void addToRequestQueue(Request<T> req) {
        getRequestQueue().add(req);
    }

    public <T> void addToRequestQueue(Request<T> req, String tag) {
        req.setTag(tag);
        getRequestQueue().add(req);
    }

    public ImageLoader getImageLoader() {
        return mImageLoader;
    }

    public void cancelPendingRequests(Object tag) {
        if (mRequestQueue != null) {
            mRequestQueue.cancelAll(tag);
        }
    }
}

Or maybe the problem is elsewhere in your code.

Read More

Wednesday, January 10, 2018

logout not working, caching on nginx, how to allow logout?

Leave a Comment

I have everything cached. If I log into my account, I can no longer log out. I need to know how to delete the cookies and session when the user logs out.

P.S. If I disable caching at the nginx level everything works fine, so the problem is in nginx.

nginx conf

gzip on;
gzip_disable "msie6";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

proxy_connect_timeout 5;
proxy_send_timeout 10;
proxy_read_timeout 10;
proxy_buffering on;
proxy_buffer_size 16k;
proxy_buffers 24 16k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

proxy_temp_path /tmp/nginx/proxy_temp;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=first_zone:100m;
proxy_cache one;
proxy_cache_valid any 30d;
proxy_cache_key $scheme$proxy_host$request_uri$cookie_US;

server conf

upstream some site {
    server unix:/webapps/some/run/gunicorn.sock fail_timeout=0;
}

server {
    listen   80;
    server_name server name;
    expires 7d;
    client_max_body_size 4G;

    access_log /webapps/some/logs/nginx-access.log;
    error_log /webapps/some/logs/nginx-error.log;
    error_log /webapps/some/logs/nginx-crit-error.log crit;
    error_log /webapps/some/logs/nginx-debug.log debug;

    location /static/ {
        alias /webapps/some/static/;
    }

    location /media/ {
        alias /webapps/some/media/;
    }

    location ~* ^(?!/media).*.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        root root_path;
        expires 7d;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        access_log off;
    }

    location ~* ^(?!/static).*.(?:css|js|html)$ {
        root root_path;
        expires 7d;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        access_log off;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache one;
        proxy_cache_min_uses 1;
        proxy_cache_use_stale error timeout;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://some;
            break;
        }
    }

    error_page 404 /404.html;
    location = /error_404.html {
        root /webapps/some/src/templates;
    }

    error_page 500 502 503 504 /500.html;
    location = /error_500.html {
        root /webapps/some/src/templates;
    }
}

2 Answers

Answers 1

Instead of logging out with a GET request, change your logout view to accept a form POST.

POST requests should not be cached.

This has the added security benefit of preventing users from being logged out with iframes or malicious links (ie: https://example.com/logout/, assuming you have not disabled django's CSRF protection).

Answers 2

You have the following question:

I need to know how to delete the cookies and session when the user logs out!

With the following code:

proxy_cache_key $scheme$proxy_host$request_uri$cookie_US; 

We first have to know what's in $cookie_US.

  • If it's simply the name of the login, then you need to realise that anyone who knows that login name, sets their own cookie accordingly, and knows the complete URL of a hidden resource that this user (and only this user) has access to, and which has been accessed recently (thus freshly cached), can now gain 'unauthorised' access to the given resource, since it will be served straight from the cache, and likely without any sort of re-validation.

Basically, for caching user-specific content, you have to make sure that you set http://nginx.org/r/proxy_cache_key to an actually secret, non-guessable value, which can then be cleared on the user's end to log out. Even after the user logs out, your cache is still subject to replay attacks by anyone who somehow still possesses that secret value, but this is usually mitigated by a short cache expiration time; besides, the secret is still supposed to stay a secret even after logout.

And clearing the session is as easy as re-setting the variable to something that no longer grants the user access; e.g., you could even implement the whole logout entirely within nginx:

proxy_cache_key $scheme$proxy_host$request_uri$cookie_US;

location /logout {
    add_header Set-Cookie "US=empty; Expires=Tue, 19-Jan-2038 03:14:07 GMT; Path=/";
    return 200 "You've been logged out!";
}

P.S. Note that the above code technically opens you up to a CSRF-style attack: any other page can simply embed an iframe pointing at /logout on your site, and your users would be logged out. Ideally, you might want to require a confirmation of the logout, or check $http_referer to ensure the link was clicked from your own site.

Read More

Thursday, November 30, 2017

Caching reverse proxy for dynamic content

Leave a Comment

I was thinking about asking on Software Recommendations, but then I've found out that it may be a too strange request and it needs some clarification first.

My points are:

  • Each response contains an etag
    • which is a hash of the content
    • and which is globally unique (with sufficient probability)
  • The content is (mostly) dynamic and may change anytime (expires and max-age headers are useless here).
  • The content is partly user-dependent, as given by the permissions (which itself change sometimes).

Basically, the proxy should contain a cache mapping the etag to the response content. The etag gets obtained from the server and in the most common case, the server does not deal with the response content at all.

It should go like follows: The proxy always sends a request to the server and then either

  • 1 the server returns only the etag and the proxy makes a lookup based on it and
    • 1.1 on cache hit,
      • it reads the response data from cache
      • and sends a response to the client
    • 1.2 on cache miss,
      • it asks the server again and then
      • the server returns the response with content and etag,
      • the proxy stores it in its cache
      • and sends a response to the client
  • 2 or the server returns the response with content and etag,
    • the proxy stores the data in its cache
    • and sends a response to the client

For simplicity, I left out the handling of the if-none-match header, which is rather obvious.

My reason for this is that the most common case 1.1 can be implemented very efficiently in the server (using its cache mapping requests to etags; the content isn't cached in the server), so that most requests can be handled without the server dealing with the response content. This should be better than first getting the content from a side cache and then serving it.

In case 1.2, there are two requests to the server, which sounds bad, but is no worse than the server asking a side cache and getting a miss.
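The three cases above can be sketched with maps standing in for the proxy cache and the server's request-to-etag index (all class, method and key names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed flow: the server first returns only an etag when it
// can (case 1); the proxy looks the etag up in its cache. On a miss
// (case 1.2) the proxy asks again, forcing the server to compute the body.
public class EtagProxy {
    static final Map<String, String> proxyCache = new HashMap<>();   // etag -> content
    static final Map<String, String> serverEtags = new HashMap<>();  // request -> etag

    // Server side: returns {etag, null} when it can skip the body (case 1),
    // else computes the content (case 2 or a forced second request).
    static String[] serve(String request, boolean forceContent) {
        String etag = serverEtags.get(request);
        if (etag != null && !forceContent) {
            return new String[] {etag, null};        // case 1: etag without body
        }
        String content = "content-of-" + request;    // stands in for the expensive computation
        etag = "etag-" + content.hashCode();
        serverEtags.put(request, etag);
        return new String[] {etag, content};
    }

    // Proxy side: implements cases 1.1, 1.2 and 2 from the question.
    static String handle(String request) {
        String[] resp = serve(request, false);
        String etag = resp[0], content = resp[1];
        if (content == null) {                       // server sent etag only
            content = proxyCache.get(etag);
            if (content == null) {                   // case 1.2: second round-trip
                resp = serve(request, true);
                etag = resp[0];
                content = resp[1];
            }
        }
        proxyCache.put(etag, content);               // cases 1.2 and 2 fill the cache
        return content;
    }

    public static void main(String[] args) {
        String first = handle("/catalog/123/item/456");   // case 2: server computes
        String second = handle("/catalog/123/item/456");  // case 1.1: served from cache
        System.out.println(first.equals(second));         // true
    }
}
```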

Q1: I wonder how to map the first request to HTTP. In case 1, it's like a HEAD request; in case 2, it's like a GET. The decision between the two is up to the server: if it can serve the etag without computing the content, it's case 1; otherwise it's case 2.

Q2: Is there a reverse proxy doing something like this? I've read about nginx, HAProxy and Varnish and it doesn't seem to be the case. This leads me to Q3: Is this a bad idea? Why?

Q4: If not, then which existing proxy is easiest to adapt?

An Example

A GET request like /catalog/123/item/456 from user U1 was served with some content C1 and etag: 777777. The proxy stored C1 under the key 777777.

Now the same request comes from user U2. The proxy forwards it, the server returns just etag: 777777, and the proxy is lucky: it finds C1 in its cache (case 1.1) and sends it to U2. In this example, neither the client nor the proxy knew the expected result.

The interesting part is how could the server know the etag without computing the answer. For example, it can have a rule stating that requests of this form return the same result for all users, assuming that the given user is allowed to see it. So when the request from U1 came, it computed C1 and stored the etag under the key /catalog/123/item/456. When the same request came from U2, it just verified that U2 is permitted to see the result.

1 Answers

Answers 1

Q1: It is a GET request. The server can answer with a "304 Not Modified" without a body.

Q2: openresty (nginx with some additional modules) can do it, but you will need to implement some logic yourself (see more detailed description below).

Q3: This sounds like a reasonable idea given the information in your question. Just some food for thought:

  • You could also split the page in user-specific and generic parts which can be cached independently.

  • You shouldn't expect the cache to keep the calculated responses forever. So, if the server returns a 304 not modified with etag: 777777 (as per your example), but the cache doesn't know about it, you should have an option to force re-building the answer, e.g. with another request with a custom header X-Force-Recalculate: true.

  • Not exactly part of your question, but: Make sure to set a proper Vary header to prevent caching issues.

  • If this is only about permissions, you could maybe also work with permission infos in a signed cookie. The cache could derive the permission from the cookie without asking the server, and the cookie is tamper proof due to the signature.

Q4: I would use openresty for this, specifically the lua-resty-redis module. Put the cached content into a redis key-value-store with the etag as key. You'd need to code the lookup logic in Lua, but it shouldn't be more than a couple of lines.

Read More

Saturday, September 2, 2017

Cache busting in a Google Chrome Web Application

Leave a Comment

We are currently using Webpack with the HtmlWebpackPlugin to generate our javascript builds for our webpage.

new HtmlPlugin({
    template: 'www/index-template.html',  // source path - relative to project root
    filename: 'index.html',               // output path - relative to outpath above
    hash: true,
    cache: true                           // only emit new bundle if changed
}),

This causes a hash to be added to the query string of the bundled javascript file.

<script type="text/javascript" src="/build/vendor.min.js?4aacccd01b71c61e598c"></script>
<script type="text/javascript" src="/build/client.min.js?4aacccd01b71c61e598c"></script>

When using any standard desktop or mobile browser, new builds are cache busted properly and the new version of the site is loaded without any effort from the user. However, we also have a chrome web app implementation where we call:

chrome.exe --app=http://localhost:65000 --disable-extensions

In this application, for some reason, the hash at the end of the javascript bundle doesn't bust the cache. We have to manually right click somewhere on the page, then click reload (or press F5).

I was thinking that possibly it is caching the index.html file, which would cause the app to never receive the updated hash for the bundles. I'm not sure how to solve that issue though, if that is the case.

I have also noticed that if our localhost server is down, the page still loads as if the server were running. This indicates to me some kind of offline cache. I checked the manifest.json parameters and can't find anything to force a reload.

I have also tried these chrome command line switches which did not help either: --disk-cache-size=0, --aggressive-cache-discard, --disable-offline-auto-reload.

Another caveat is that we need to retain the localStorage data and cookies. Everything works just fine in a standard browser window, in any browser for that matter, but not when the page runs inside a Chrome web app.

2 Answers

Answers 1

Are you talking about a "Progressive Web App" with service workers? If so, then the html file can (and should) be cached on first download. You need some sort of aggressive update process on the client to ensure new files are loaded properly.

Perhaps having an api call that checks some sort of dirty flag on the server could work, and if it comes back true, it should reload the template files. Or something more complex where it gets an array of dirty files from the server so it knows which ones to reload instead of loading everything. Just some ideas.

Answers 2

As your page works without the server running at localhost, I suspect that your app is offline-first. This is done through service workers (as pointed out by @Chad H), which are officially supported by Chrome and experimental in other browsers, so expect different behavior in other browsers. To bust the cache:

In Production

For a permanent solution, you need to find and modify the service worker (SW) code. Deletion of old caches happens only in the activate event of the SW.

You can also read more about Service worker and ask a question with the updated SW code. Also, check out this resolved issue that faced a problem similar to yours.

For dev setup

You can use the Disable Cache option under Network tab in Chrome DevTools (works only when DevTools is open) or use a more robust chrome extension called Cache Killer.

Read More

Sunday, August 27, 2017

Clear Cache in iOS: Deleting Application Data of Other Apps

Leave a Comment

Recently, I have come across many apps which "clear cache" on iPhone. They also warn that you may lose some saved data and temp files.

What I know is that Apple doesn't allow one app to access another app's data or directories. So how are they clearing cache data? Can anyone shed some light on this?

Reference: Magic Phone Cleaner

Power Clean

4 Answers

Answers 1

They simply fill the free space on the iPhone temporarily with random data, leaving the system with no free space at all.

This forces iOS to clear all temp data, caches, and iCloud Photos (if you enabled storage optimization) to free up space. So basically they are tricking the system into clearing its temp and cached data.
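For illustration, the trick can be sketched generically (hypothetical Java, obviously not the actual app code): find out how much free space there is, claim roughly that much with a throwaway file, then delete it so the space is returned.

```java
import java.io.File;
import java.io.RandomAccessFile;

public class SpaceFiller {
    /**
     * Writes a throwaway file of the given size into dir and deletes it again.
     * Calling this with dir.getUsableSpace() approximates the "fill the disk" trick.
     * Returns the number of bytes the file occupied.
     */
    public static long fillAndRelease(File dir, long bytes) throws Exception {
        File dummy = new File(dir, "cacheWipe.bin");
        try (RandomAccessFile raf = new RandomAccessFile(dummy, "rw")) {
            raf.setLength(bytes); // quick way to claim the space
        }
        long size = dummy.length();
        if (!dummy.delete()) {
            throw new IllegalStateException("could not remove dummy file");
        }
        return size;
    }
}
```

In practice you would pass dir.getUsableSpace() as the size; the sketch takes it as a parameter so it can be tried safely. Note that setLength may create a sparse file on some filesystems, so a real implementation would write actual data.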

Answers 2

No app can access anything outside of its sandbox environment. In other words, it's technically impossible to clean another app's cache on an iPhone unless it's jailbroken. Most of these apps don't do what they claim; they just give the user an illusion. Loading up the memory can force iOS to terminate other apps in the background, but it's unlikely that this gives any performance boost.

Answers 3

I read the disassembly of the Magic Phone Cleaner app and here is what the app does:

NSArray *path = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, true);
NSError *error = nil;

NSDictionary *attributes = [[NSFileManager defaultManager] attributesOfItemAtPath:[path lastObject] error:&error];

if (!error) {
    // get the free space on the user partition
    unsigned long long freeSize = [attributes[NSFileSystemFreeSize] unsignedLongLongValue];

    // get the path for "cacheWipe.txt" in the app's temporary files directory
    NSString *dummyFilePath = [[NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingString:@"cacheWipe.txt"]] path];

    NSFileHandle *handle = [NSFileHandle fileHandleForReadingAtPath:dummyFilePath];
    // seeking past the end is used to fill the file with 0s
    [handle seekToFileOffset:freeSize];

    // create data and write it to the file above
    NSData *data = [@"foofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo" dataUsingEncoding:NSUnicodeStringEncoding];
    [handle writeData:data];

    // delete the file from the app's temporary files directory
    [[NSFileManager defaultManager] removeItemAtPath:dummyFilePath error:&error];
} else {
    NSLog(@"Error Obtaining System Memory Info: Domain = %@, Code = %ld", [error domain], [error code]);
}

Please note that the iPhone, iPod, or iPad has two partitions, a user partition and a system partition. The system partition is read-only and mounted at /, while the user partition is read-write and mounted at /private/var.

This code tries to get the free space on the user partition and then creates a file with the same size as that free space; this is what seekToFileOffset does. Then the end of that file is filled with foofoo... to commit the changes to storage.

Answers 4

It doesn't clear data inside other apps; it clears external data left behind by apps. Clearing the cache basically deletes all the temporary files.

Read More

Thursday, August 10, 2017

Android cache background process is increasing continuously

Leave a Comment

In my Android application, "PriceDekho", the cached background process takes too much space. When I first start the application it takes around 30 MB (which I think is OK), but as I browse through the application's pages the size keeps growing, up to 200 MB. The size of this cached background process also varies with the device's RAM: on a device with 2 GB of RAM it grows to 500 to 700 MB.

My application has only 5 to 6 screens. I just need to stabilize the cached background process size.

How can I clear the application's cache? Please help.


2 Answers

Answers 1

It sounds like you have memory leaks which the garbage collector cannot remove, for example when a non-context class holds a reference to a Context that is never released. Most of the time Android Studio will point those out, but you can also use LeakCanary to search for them.

Add the dependency

dependencies {
   debugCompile 'com.squareup.leakcanary:leakcanary-android:1.5.1'
}

and use it in your Application class. Create one if you don't have it already.

public class ExampleApplication extends Application {

  @Override public void onCreate() {
    super.onCreate();
    if (LeakCanary.isInAnalyzerProcess(this)) {
      // This process is dedicated to LeakCanary for heap analysis.
      // You should not init your app in this process.
      return;
    }
    LeakCanary.install(this);
    // Normal app init code...
  }
}

Answers 2

The cached background process surely varies with the RAM size, as the system will allow more data to be cached when more memory is available.

However, the increasing memory usage of the application may be due either to memory leaks or to huge objects being created in the background. The garbage collector cannot reclaim memory that is still referenced, so leaks caused by programming mistakes will keep accumulating.

As there is no code posted for review, some trivial questions to ask would be:

  • Is there a lot of image processing being done using bitmaps? If yes, are those bitmaps being recycled?
  • Is your application using the context judiciously, i.e. avoiding use of the application context unless required?
  • Are the listeners being unregistered, if any?

As suggested above, LeakCanary will surely be more useful in this case than the Android Monitor.

Read More

Thursday, July 13, 2017

How can cached value be invalidated in single thread?

Leave a Comment

I use HttpContext.Current.Cache to store various values retrieved from my database because my app is data-intensive. When running my website on my new laptop with VS2017 installed (on another laptop with VS2015 I never see this problem), I'm seeing a very strange issue where cached values seem to be randomly cleared, almost in a way that defies logic.

For instance, I have an if clause whose condition is that the cache item in question is not null. My code is definitely following the path through this if statement, but a couple of statements later the debugger shows that the cache item is in fact null- causing my app to fail.

public static SportSeason GetCurrentSportSeason(string sportCode)
{
  SportSeason sportSeason = null;
  string key = "Sheets_SportSeason_" + sportCode;

  int i = 0;
  if (BaseSheet.Settings.EnableCaching && BizObject.Cache[key] != null)
  {
    i = 1;
    sportSeason = (SportSeason)BizObject.Cache[key];
  }
  else
  {
    i = 2;
    sportSeason = GetSportSeasonFromSportSeasonDetails(SiteProvider.Sheets.GetCurrentSportSeason(sportCode));
    BaseSheet.CacheData(key, sportSeason);
  }
  if (sportSeason == null)
  {
    int j = i;
  }
  return sportSeason;
}

I can set a breakpoint in the final if and the variable i is set to 1, but the sportSeason object is NULL (as is the cache entry). How can this be when the only way the code could have entered the first if clause is if the cache item was not null?

Here is a screenshot showing some watch variables with a breakpoint in the final if.

This is tough to track down because it happens randomly throughout my business objects. Sometimes I have to refresh the page 3 or 4 times before I see this issue.

How can the cache be invalidated so quickly, and by what? The screenshot shows that there aren't many cached items, so I don't think I'm running out of memory, and no other processes are running that could be clobbering the cache.

EDIT: Through more debugging I've determined that only the cache key that is checked on Line 91 is cleared (set to null) when it is checked again at the breakpoint. All of the other cache records are still there.

EDIT2: I've isolated the problem down to this, although it still seems to defy logic:

HttpContext.Current.Cache.Insert("ABC", "123", null, DateTime.Now.AddSeconds(600), TimeSpan.Zero);
int i = 0;

I clear all my cache. When I step over the Cache.Insert statement, my cache count goes to 1. However, the cached item according to that key remains null (see watch window).


Also, if I execute one more statement (the int i = 0), the cache count goes back to zero.


Edit3: It's the AbsoluteExpiration parameter of the Insert() that is killing me. Somehow a clock is off.

If I set the AbsoluteExpiration to 5 hours into the future (300 minutes), it doesn't work. If I set it to 5 hours and one minute (301 minutes) into the future, everything works.

    HttpContext.Current.Cache.Insert("ABC", "123", null, DateTime.Now.AddMinutes(301), TimeSpan.Zero); 

The clock on my laptop is 100% accurate and you can see the Intellisense shows the correct time as well. Can the cache be based off of some other clock that is 5 hours off?


Edit4: Looks like someone else is off by 5 hours.

3 Answers

Answers 1

The fix is to always use DateTime.UtcNow instead of DateTime.Now when specifying the AbsoluteExpiration parameter.

Do this:

HttpContext.Current.Cache.Insert("ABC", "123", null, DateTime.UtcNow.AddMinutes(1), System.Web.Caching.Cache.NoSlidingExpiration); 

Not this:

HttpContext.Current.Cache.Insert("ABC", "123", null, DateTime.Now.AddMinutes(1), System.Web.Caching.Cache.NoSlidingExpiration); 

When I do this, it correctly recognizes that the cache is valid and I can retrieve the value. As to why this is required on my Windows 10 Pro laptop but not on my Windows 10 Home, I have no idea.
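The failure mode is generic to any cache that mixes local wall-clock time with UTC when computing an absolute expiration: on a UTC-5 machine every entry looks five hours expired the moment it is written. A hedged Java sketch of a timezone-safe expiry check using epoch-based Instant values (CacheEntry is a made-up class, not the ASP.NET cache):

```java
import java.time.Duration;
import java.time.Instant;

public class CacheEntry {
    private final Object value;
    private final Instant expiresAt; // Instant is epoch/UTC based, immune to timezone skew

    public CacheEntry(Object value, Duration ttl) {
        this.value = value;
        this.expiresAt = Instant.now().plus(ttl);
    }

    /** An entry is valid strictly before its absolute expiration. */
    public boolean isExpired() {
        return Instant.now().isAfter(expiresAt);
    }

    public Object value() {
        return value;
    }
}
```

Because both the stored expiration and the comparison use the same epoch-based clock, there is no local-vs-UTC mismatch to be off by the timezone offset.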

Answers 2

You can add a callback function that can tell you why an item was removed:

https://msdn.microsoft.com/en-us/library/7kxdx246.aspx

You specify the callback when adding an item, and it will be fired when the item is removed. Put a breakpoint or logging output in there to print out the CacheItemRemovedReason argument.
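Outside ASP.NET the same diagnostic idea looks like this (hypothetical Java with a made-up RemovalReason enum): register a callback when inserting, and have the cache report why each entry left, so a breakpoint or log line in the callback reveals the cause.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class CallbackCache {
    public enum RemovalReason { REMOVED, EXPIRED, EVICTED } // hypothetical reasons

    private final Map<String, Object> store = new HashMap<>();
    private final Map<String, BiConsumer<String, RemovalReason>> callbacks = new HashMap<>();

    /** Insert a value together with a callback fired when the entry is removed. */
    public void put(String key, Object value, BiConsumer<String, RemovalReason> onRemoved) {
        store.put(key, value);
        callbacks.put(key, onRemoved);
    }

    /** Remove an entry and report the reason to its callback; log or break there. */
    public void remove(String key, RemovalReason reason) {
        if (store.remove(key) != null) {
            BiConsumer<String, RemovalReason> cb = callbacks.remove(key);
            if (cb != null) {
                cb.accept(key, reason);
            }
        }
    }
}
```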

If that doesn't help, you might try the debugging steps laid out in the top answer to ASP.NET HttpContext Cache removes right after insertion. That may catch the case where the app is terminating.

Answers 3

Review your IIS application pool recycling options to rule out a recycle event, because an application pool recycle will clear your cache.

Per the App pool recycling settings docs:

You can specify that IIS recycle an application pool at set intervals (such as every 180 minutes), at a specific time each day, or after the application pool receives a certain number of requests. You can also configure the element to restart the application pool when the worker process virtual memory and physical memory usage reaches a specific threshold.

Even changing your config can recycle the app pool.

Ensure that no other apps are sharing your app pool, then turn on recycle logging and reproduce the issue. See if the cache disappearance correlates with an app pool recycle in the log. If so, determine why the recycle is happening and change the configuration so the app pool doesn't recycle during normal operating hours.

Read More

Sunday, May 21, 2017

Get HttpRequestMessage from ActionFilter or ASP.NET MVC Web Controller outside of Web API

Leave a Comment

I am having a tough time trying to get an instance of an HttpRequestMessage so I can pass it to the GetCacheOutputProvider method below from an ActionFilter and/or a normal ASP.NET MVC controller. I know I can from Web API, but what about these cases?

public class CacheResetFilter : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var cache = GlobalConfiguration.Configuration.CacheOutputConfiguration().GetCacheOutputProvider(HTTPREQUESTMESSAGE);
        cache.Contains("eventid=" + eventId);

        base.OnActionExecuted(filterContext);
    }
}

2 Answers

Answers 1

1. In an MVC controller you can do it like this:

public class HomeController : Controller
{
    public ActionResult Test()
    {
        HttpRequestMessage httpRequestMessage =
            HttpContext.Items["MS_HttpRequestMessage"] as HttpRequestMessage;
        return View();
    }
}

2. In an action filter you can do it like this:

public class HttpRequestMessageAttribute : System.Web.Mvc.ActionFilterAttribute
{
    public override void OnActionExecuted(System.Web.Mvc.ActionExecutedContext filterContext)
    {
        HttpRequestMessage httpRequestMessage =
            filterContext.HttpContext.Items["MS_HttpRequestMessage"] as HttpRequestMessage;
        //var cache = GlobalConfiguration.Configuration.CacheOutputConfiguration().GetCacheOutputProvider(httpRequestMessage);
        //cache.Contains("eventid=" + eventId);

        base.OnActionExecuted(filterContext);
    }
}

OR

public class HttpRequestMessageAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpRequestMessage httpRequestMessage =
            filterContext.HttpContext.Items["MS_HttpRequestMessage"] as HttpRequestMessage;

        base.OnActionExecuting(filterContext);
    }
}

Hopefully this helps you.

Answers 2

I don't think there is a simple way. You want an instance of the HttpRequestMessage class, which semantically represents a (current) request to Web API. But you are not inside Web API and don't handle any Web API requests, so it is logical that you can't easily have a valid instance of HttpRequestMessage (if you could, what URL would it point to?). IMHO the most obvious way to work around this is to use the RegisterCacheOutputProvider method of CacheOutputConfiguration to inject your own cache provider that returns an instance of IApiOutputCache which you can access directly through other means (such as a globally visible singleton). It looks like there is only one standard implementation of IApiOutputCache, MemoryCacheDefault, so if you return that from your registered provider, you should be OK.
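The globally visible singleton idea can be sketched generically (hypothetical Java; the real IApiOutputCache and MemoryCacheDefault types are C#): one shared instance is handed to the framework at registration time and is also reachable statically from any filter or controller.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class SharedOutputCache {
    // The single shared instance, visible both to framework wiring and to plain code.
    private static final SharedOutputCache INSTANCE = new SharedOutputCache();

    private final Map<String, Object> entries = new ConcurrentHashMap<>();

    private SharedOutputCache() { }

    public static SharedOutputCache instance() {
        return INSTANCE;
    }

    public void put(String key, Object value) {
        entries.put(key, value);
    }

    public boolean contains(String key) {
        return entries.containsKey(key);
    }

    public void remove(String key) {
        entries.remove(key);
    }
}
```

Whatever object the framework's provider hands out and whatever the filter fetches statically are guaranteed to be the same cache, which is the whole point of the workaround.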

If you want to be more hacky: it looks like all MemoryCacheDefault instances internally use the same shared (static) field to do the actual work, so you could probably just create a new MemoryCacheDefault in your filter or controller and still be OK for now. But to me this sounds way too hacky compared to the alternative in the first part of my answer.

Read More

Friday, April 28, 2017

How do I use a Rails cache to store Nokogiri objects?

Leave a Comment

I'm using Rails 5 and want to use a Rails cache to store Nokogiri objects.

I created this in config/initializers/cache.rb:

$cache = ActiveSupport::Cache::MemoryStore.new 

and I wanted to store documents like:

$cache.fetch(url) {
  result = get_content(url, headers, follow_redirects)
}

but I'm getting this error:

Error during processing: (TypeError) no _dump_data is defined for class Nokogiri::HTML::Document
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:671:in `dump'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:671:in `dup_value!'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache/memory_store.rb:128:in `write_entry'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:398:in `block in write'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:562:in `block in instrument'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/notifications.rb:166:in `instrument'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:562:in `instrument'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:396:in `write'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:596:in `save_block_result_to_cache'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/activesupport-5.0.2/lib/active_support/cache.rb:300:in `fetch'
/Users/davea/Documents/workspace/myproject/app/helpers/webpage_helper.rb:116:in `get_cached_content'
/Users/davea/Documents/workspace/myproject/app/helpers/webpage_helper.rb:73:in `get_url'
/Users/davea/Documents/workspace/myproject/app/services/abstract_my_object_finder_service.rb:29:in `process_data'
/Users/davea/Documents/workspace/myproject/app/services/run_crawlers_service.rb:26:in `block (2 levels) in run_all_crawlers'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:348:in `run_task'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:337:in `block (3 levels) in create_worker'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:320:in `loop'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:320:in `block (2 levels) in create_worker'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:319:in `catch'
/Users/davea/.rvm/gems/ruby-2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:319:in `block in create_worker'

What do I need to do in order to be able to store these objects in a cache?

2 Answers

Answers 1

Store the XML as a string, not as the object, and parse it once you get it out of the cache.

Edit: response to comment

Cache this instead

nokogiri_object.to_xml 

Edit2: response to comment. Something along these lines; you will need to post more code if you want more specific help.

nokogiri_object = Nokogiri::XML(cache.fetch('xml_doc')) 

Edit3: Response to 'Thanks, but what is the code for "Store serialized object in cache"? I thought the body of "$cache.fetch(url) {" would take care of storing and then retrieving things?'

cache.write('url', xml_or_serialized_nokogiri_string) 
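The same store-a-string, reparse-on-read pattern, sketched in generic Java with the JDK's own XML parser standing in for Nokogiri (XmlStringCache is a made-up name):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlStringCache {
    private final Map<String, String> cache = new HashMap<>();

    /** Cache the serialized form, never the parsed document object. */
    public void put(String url, String xml) {
        cache.put(url, xml);
    }

    /** Reparse on the way out; parsing is cheap compared with refetching the page. */
    public Document get(String url) throws Exception {
        String xml = cache.get(url);
        if (xml == null) {
            return null;
        }
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }
}
```

Storing the string sidesteps the original error entirely, because the cache only ever has to serialize a plain string.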

Answers 2

Use Nokogiri's serialize functionality:

$cache = ActiveSupport::Cache::MemoryStore.new

noko_object = Nokogiri::HTML::Document.new

serial = noko_object.serialize
$cache.write(url, serial)
# The serialized Nokogiri document is now in the store under the URL key.
result = $cache.read(url)

noko_object = Nokogiri::HTML::Document.new(result)
# noko_object is now the original document again :)

Check out the documentation here for more information.

Read More

Friday, March 10, 2017

Laravel occasionally failing to read cache

Leave a Comment

I am using laravel caching (the remember() method) on a website with a code like this:

$postedItems = Cache::remember('home_posted_items', $this->cacheTimes['postedItems'], function() {
    /* the stuff that prepares data */
    return ['items' => $items, 'firstItemNumber' => $firstItem];
});

The problem is that sometimes (every few days, I'd say) the cached file seems to become corrupted, and as a result I have downtime until the cache expires (unless I clear it manually).

Here is a part of the error stack that might be relevant:

[2017-02-04 22:01:34] production.ERROR: ErrorException: unserialize(): Error at offset 131059 of 131062 bytes in /home/path/to/app/vendor/laravel/framework/src/Illuminate/Cache/FileStore.php:78
Stack trace:
#0 [internal function]: Illuminate\Foundation\Bootstrap\HandleExceptions->handleError(8, 'unserialize(): ...', '/home/path/to/...', 78, Array)
#1 /home/path/to/app/vendor/laravel/framework/src/Illuminate/Cache/FileStore.php(78): unserialize('a:2:{s:7:"item...')
#2 /home/path/to/app/vendor/laravel/framework/src/Illuminate/Cache/FileStore.php(47): Illuminate\Cache\FileStore->getPayload('home_posted_ite...')
#3 /home/path/to/app/vendor/laravel/framework/src/Illuminate/Cache/Repository.php(98): Illuminate\Cache\FileStore->get('home_posted_ite...')
#4 /home/path/to/app/vendor/laravel/framework/src/Illuminate/Cache/Repository.php(202): Illuminate\Cache\Repository->get('home_posted_ite...')
#5 [internal function]: Illuminate\Cache\Repository->remember('home_posted_ite...', 1, Object(Closure))
#6 /home/path/to/app/vendor/laravel/framework/src/Illuminate/Cache/CacheManager.php(318): call_user_func_array(Array, Array)
#7 /home/path/to/app/bootstrap/cache/compiled.php(6089): Illuminate\Cache\CacheManager->__call('remember', Array)
#8 /home/path/to/app/app/Http/Controllers/HomeController.php(197): Illuminate\Support\Facades\Facade::__callStatic('remember', Array)

How to solve this problem?

From experience I know that clearing the cache solves the problem, so it seems the issue is some corruption in the files. I think if I could detect that the file is unreadable and just clear the cache (Cache::forget(...)), that would solve the problem.

What would be the best way to detect such an error? All the logic for retrieving the file is hidden inside the remember() method. Should I just unwrap it and use other methods, something like the following?

if (!($postedItems = @Cache::get('home_posted_items'))) {
    // prepare data

    $postedItems = ['items' => $items, 'firstItemNumber' => $firstItem];

    Cache::put('home_posted_items', $postedItems, $this->cacheTimes['postedItems']);
}

2 Answers

Answers 1

IMHO it could be a problem with the file driver. Even if you have a good web server that handles concurrent requests well, the file driver is not as good at handling concurrency.

That is related to the fact that filesystems themselves are usually not good at handling different concurrent processes reading from and writing to the same file.

In the end I advise you to switch the driver to something more capable of handling concurrency, e.g. Memcached or Redis; the database driver should also be good enough.

You can find the same suggestion for sessions here (look at the second post), and I think it is relevant for the file cache driver too.
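If switching drivers is not immediately possible, the window in which a reader can see a half-written file can at least be closed by writing to a temporary file and renaming it into place, since a rename within one directory is atomic on most filesystems. A hedged Java sketch of the technique (Laravel's file driver is not implemented this way; this only illustrates the pattern):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicCacheFile {
    /** Readers see either the complete old content or the complete new content. */
    public static void write(Path target, String payload) throws Exception {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, payload.getBytes(StandardCharsets.UTF_8));
        // The rename is the only visible step; partial writes stay in the temp file.
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE,
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static String read(Path target) throws Exception {
        return new String(Files.readAllBytes(target), StandardCharsets.UTF_8);
    }
}
```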

Answers 2

It does not seem to be a permission issue (you can see #1 in the stack trace); somehow Laravel is overwriting (corrupting) the cache files. You need to find out what content you have at the end of the file.

You also need to check what kind of content you are putting into the cache; Laravel's file storage driver could have a bug.

And best way to debug that is here. http://stackoverflow.com/a/10152996/3305978

Read More

Wednesday, March 1, 2017

Spring Cache refreshing obsolete values

Leave a Comment

In a Spring-based application I have a service which calculates some Index. The Index is relatively expensive to calculate (say, 1 s) but relatively cheap to check for actuality (say, 20 ms). The actual code does not matter; it goes along the following lines:

public Index getIndex() {
    return calculateIndex();
}

public Index calculateIndex() {
    // 1 second or more
}

public boolean isIndexActual(Index index) {
    // 20ms or less
}

I'm using Spring Cache to cache the calculated index via @Cacheable annotation:

@Cacheable(cacheNames = CacheConfiguration.INDEX_CACHE_NAME)
public Index getIndex() {
    return calculateIndex();
}

We currently configure GuavaCache as cache implementation:

@Bean
public Cache indexCache() {
    return new GuavaCache(INDEX_CACHE_NAME, CacheBuilder.newBuilder()
            .expireAfterWrite(indexCacheExpireAfterWriteSeconds, TimeUnit.SECONDS)
            .build());
}

@Bean
public CacheManager indexCacheManager(List<Cache> caches) {
    SimpleCacheManager cacheManager = new SimpleCacheManager();
    cacheManager.setCaches(caches);
    return cacheManager;
}

What I also need is to check if cached value is still actual and refresh it (ideally asynchronously) if it is not. So ideally it should go as follows:

  • When getIndex() is called, Spring checks if there is a value in the cache.
    • If not, new value is loaded via calculateIndex() and stored in the cache
    • If yes, the existing value is checked for actuality via isIndexActual(...).
      • If old value is actual, it is returned.
      • If old value is not actual, it is returned, but removed from the cache and loading of the new value is triggered as well.

Basically I want to serve the value from the cache very fast (even if it is obsolete) but also trigger refreshing right away.

What I've got working so far is checking for actuality and eviction:

@Cacheable(cacheNames = INDEX_CACHE_NAME)
@CacheEvict(cacheNames = INDEX_CACHE_NAME, condition = "target.isObsolete(#result)")
public Index getIndex() {
    return calculateIndex();
}

This triggers eviction if the result is obsolete and returns the old value immediately even in that case. But it does not refresh the value in the cache.

Is there a way to configure Spring Cache to actively refresh obsolete values after eviction?

Update

Here's a MCVE.

public static class Index {

    private final long timestamp;

    public Index(long timestamp) {
        this.timestamp = timestamp;
    }

    public long getTimestamp() {
        return timestamp;
    }
}

public interface IndexCalculator {
    public Index calculateIndex();

    public long getCurrentTimestamp();
}

@Service
public static class IndexService {
    @Autowired
    private IndexCalculator indexCalculator;

    @Cacheable(cacheNames = "index")
    @CacheEvict(cacheNames = "index", condition = "target.isObsolete(#result)")
    public Index getIndex() {
        return indexCalculator.calculateIndex();
    }

    public boolean isObsolete(Index index) {
        // the null check must come before the dereference
        if (index == null) {
            return true;
        }
        return index.getTimestamp() < indexCalculator.getCurrentTimestamp();
    }
}

Now the test:

@Test
public void test() {
    final Index index100 = new Index(100);
    final Index index200 = new Index(200);

    when(indexCalculator.calculateIndex()).thenReturn(index100);
    when(indexCalculator.getCurrentTimestamp()).thenReturn(100L);
    assertThat(indexService.getIndex()).isSameAs(index100);
    verify(indexCalculator).calculateIndex();
    verify(indexCalculator).getCurrentTimestamp();

    when(indexCalculator.getCurrentTimestamp()).thenReturn(200L);
    when(indexCalculator.calculateIndex()).thenReturn(index200);
    assertThat(indexService.getIndex()).isSameAs(index100);
    verify(indexCalculator, times(2)).getCurrentTimestamp();
    // I'd like to see indexCalculator.calculateIndex() called after
    // indexService.getIndex() returns the old value, but it does not happen
    // verify(indexCalculator, times(2)).calculateIndex();

    assertThat(indexService.getIndex()).isSameAs(index200);
    // Instead, indexCalculator.calculateIndex() is called on
    // the next call to indexService.getIndex();
    // I'd like to have it earlier
    verify(indexCalculator, times(2)).calculateIndex();
    verify(indexCalculator, times(3)).getCurrentTimestamp();
    verifyNoMoreInteractions(indexCalculator);
}

I'd like to have the value refreshed shortly after it is evicted from the cache. At the moment it is refreshed only on the next call of getIndex(). If the value had been refreshed right after eviction, that would save me 1 s later on.

I've tried @CachePut, but it also does not get me the desired effect. The value is refreshed, but the method is always executed, no matter what condition or unless specify.

The only way I see at the moment is to call getIndex() twice (the second time async/non-blocking). But that's kind of stupid.

4 Answers

Answers 1

I would say the easiest way to do what you need is to create a custom aspect which does all the magic transparently and can be reused in more places.

So assuming you have the spring-aop and aspectj dependencies on your classpath, the following aspect will do the trick.

@Aspect
@Component
public class IndexEvictorAspect {

    @Autowired
    private Cache cache;

    @Autowired
    private IndexService indexService;

    private final ReentrantLock lock = new ReentrantLock();

    @AfterReturning(pointcut = "execution(* hello.IndexService.getIndex())", returning = "index")
    public void afterGetIndex(Object index) {
        if (indexService.isObsolete((Index) index) && lock.tryLock()) {
            try {
                Index newIndex = indexService.calculateIndex();
                cache.put(SimpleKey.EMPTY, newIndex);
            } finally {
                lock.unlock();
            }
        }
    }
}

Several things to note

  1. As your getIndex() method does not have parameters, the value is stored in the cache under the key SimpleKey.EMPTY.
  2. The code assumes that IndexService is in the hello package.

Answers 2

Something like the following could refresh the cache in the desired way and keeps the implementation simple and straightforward.

There is nothing wrong about writing clear and simple code, provided it satisfies the requirements.

@Service
public static class IndexService {
    @Autowired
    private IndexCalculator indexCalculator;

    public Index getIndex() {
        Index cachedIndex = getCachedIndex();

        if (isObsolete(cachedIndex)) {
            evictCache();
            asyncRefreshCache();
        }

        return cachedIndex;
    }

    @Cacheable(cacheNames = "index")
    public Index getCachedIndex() {
        return indexCalculator.calculateIndex();
    }

    public void asyncRefreshCache() {
        CompletableFuture.runAsync(this::getCachedIndex);
    }

    @CacheEvict(cacheNames = "index")
    public void evictCache() { }

    public boolean isObsolete(Index index) {
        // the null check must come before the dereference
        if (index == null) {
            return true;
        }
        return index.getTimestamp() < indexCalculator.getCurrentTimestamp();
    }
}

Answers 3

EDIT1:

The caching abstraction based on @Cacheable and @CacheEvict will not work in this case. Their behavior is as follows: during a @Cacheable call, if the value is in the cache, it is returned from the cache; otherwise it is computed, put into the cache, and then returned. During @CacheEvict the value is removed from the cache, so from that moment there is no value in the cache, and the first incoming call to the @Cacheable method forces recalculation and repopulation. Using @CacheEvict(condition="...") only determines, during that call, whether to remove the value based on the condition. So after each invalidation the @Cacheable method will run the heavyweight routine to populate the cache.
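The @Cacheable/@CacheEvict contract described above is plain get-or-compute plus remove; as a minimal sketch (a made-up class, not Spring's implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class GetOrCompute {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    /** Mirrors @Cacheable: return the cached value if present, otherwise compute, store, return. */
    public Object get(String key, Supplier<Object> loader) {
        return cache.computeIfAbsent(key, k -> loader.get());
    }

    /** Mirrors @CacheEvict: after this, the next get() recomputes. */
    public void evict(String key) {
        cache.remove(key);
    }
}
```

The sketch makes the problem visible: nothing in this contract re-runs the loader at eviction time, which is exactly why an extra mechanism is needed for proactive refresh.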

To have the value stored in the cache manager and updated asynchronously, I would propose reusing the following routine:

@Inject
@Qualifier("my-configured-caching")
private Cache cache;

private ReentrantLock lock = new ReentrantLock();

public Index getIndex() {
    Index storedCache;
    synchronized (this) {
        storedCache = cache.get("singleKey_Or_AnythingYouWant", Index.class);

        if (storedCache == null) {
            this.lock.lock();
            storedCache = indexCalculator.calculateIndex();
            this.cache.put("singleKey_Or_AnythingYouWant", storedCache);
            this.lock.unlock();
        }
    }
    if (isObsolete(storedCache)) {
        if (!lock.isLocked()) {
            lock.lock();
            this.asyncUpgrade();
        }
    }
    return storedCache;
}

The first block is synchronized simply to make all subsequent calls wait until the first call has populated the cache.

Then the system checks whether the cache should be regenerated. If so, a single call asynchronously updates the value while the current thread returns the cached value. Calls arriving while the cache is being recalculated simply return the most recent value from the cache, and so on.

With a solution like this you can reuse large volumes of memory of, let's say, a Hazelcast cache manager, as well as multiple key-based cache storage, while keeping your complex cache-actualization and eviction logic.
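The pattern described throughout this answer (return the cached value immediately, refresh it asynchronously when obsolete, and allow only one refresh at a time) is not Spring-specific. Here is a minimal Python sketch of the same idea; the compute callback and TTL are stand-ins for indexCalculator and the cache configuration, not part of the original answer:

```python
import threading
import time

class StaleWhileRevalidateCache:
    """Serve the cached value immediately; when it becomes obsolete,
    refresh it in a single background thread while callers keep
    receiving the stale value."""

    def __init__(self, compute, ttl_seconds):
        self._compute = compute          # the expensive calculation (stand-in)
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._value = None
        self._timestamp = 0.0
        self._refreshing = False

    def get(self):
        with self._lock:
            if self._value is None:
                # first call: every caller waits until the value is populated
                self._value = self._compute()
                self._timestamp = time.monotonic()
                return self._value
            value = self._value
            stale = (time.monotonic() - self._timestamp) > self._ttl
            if stale and not self._refreshing:
                # only one background refresh at a time
                self._refreshing = True
                threading.Thread(target=self._refresh, daemon=True).start()
        return value

    def _refresh(self):
        try:
            fresh = self._compute()      # recalculate outside the lock
            with self._lock:
                self._value = fresh
                self._timestamp = time.monotonic()
        finally:
            with self._lock:
                self._refreshing = False
```

The flag plays the role of the ReentrantLock in the Java code: it prevents a second refresh from being scheduled while one is already running.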

OR IF you like the @Cacheable annotations, you can do it in the following way:

@Cacheable(cacheNames = "index", sync = true)
public Index getCachedIndex() {
    return new Index();
}

@CachePut(cacheNames = "index")
public Index putIntoCache() {
    return new Index();
}

public Index getIndex() {
    Index latestIndex = getCachedIndex();

    if (isObsolete(latestIndex)) {
        recalculateCache();
    }

    return latestIndex;
}

private ReentrantLock lock = new ReentrantLock();

@Async
public void recalculateCache() {
    if (!lock.isLocked()) {
        lock.lock();
        putIntoCache();
        lock.unlock();
    }
}

Which is almost the same, as above, but reuses spring's Caching annotation abstraction.

ORIGINAL: Why are you trying to resolve this via caching? If this is a single value (not key-based), you can organize your code in a simpler manner, keeping in mind that a Spring service is a singleton by default.

Something like that:

@Service
public static class IndexService {
    @Autowired
    private IndexCalculator indexCalculator;

    private Index storedCache;

    private ReentrantLock lock = new ReentrantLock();

    public Index getIndex() {
        if (storedCache == null) {
            synchronized (this) {
                this.lock.lock();
                Index result = indexCalculator.calculateIndex();
                this.storedCache = result;
                this.lock.unlock();
            }
        }
        if (isObsolete()) {
            if (!lock.isLocked()) {
                lock.lock();
                this.asyncUpgrade();
            }
        }
        return storedCache;
    }

    @Async
    public void asyncUpgrade() {
        Index result = indexCalculator.calculateIndex();
        synchronized (this) {
            this.storedCache = result;
        }
        this.lock.unlock();
    }

    public boolean isObsolete() {
        return storedCache == null
                || storedCache.getTimestamp() < indexCalculator.getCurrentTimestamp();
    }
}

i.e. the first call is synchronized and has to wait until the result is populated. Then, if the stored value is obsolete, the system performs an asynchronous update of the value, while the current thread receives the stored "cached" value.

I also introduced a reentrant lock to restrict the stored index to a single upgrade at a time.

Answers 4

I would use a Guava LoadingCache in your index service, like shown in the code sample below:

LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .refreshAfterWrite(1, TimeUnit.MINUTES)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) { // no checked exception
            return getGraphFromDatabase(key);
          }

          public ListenableFuture<Graph> reload(final Key key, Graph prevGraph) {
            if (neverNeedsRefresh(key)) {
              return Futures.immediateFuture(prevGraph);
            } else {
              // asynchronous!
              ListenableFutureTask<Graph> task = ListenableFutureTask.create(new Callable<Graph>() {
                public Graph call() {
                  return getGraphFromDatabase(key);
                }
              });
              executor.execute(task);
              return task;
            }
          }
        });

You can create an async reloading cache loader by calling Guava's method:

public abstract class CacheLoader<K, V> {
    ...
    public static <K, V> CacheLoader<K, V> asyncReloading(
            final CacheLoader<K, V> loader, final Executor executor) {
        ...
    }
}

The trick is to run the reload operation in a separate thread, using a ThreadPoolExecutor for example:

  • On first call, the cache is populated by the load() method, thus it may take some time to answer,
  • On subsequent calls, when the value needs to be refreshed, it's being computed asynchronously while still serving the stale value. It will serve the updated value once the refresh has completed.
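The load/reload semantics in those two bullets can be approximated in a few lines of Python. This is a rough, illustrative analogue of refreshAfterWrite (not Guava's actual implementation, and not fully thread-safe):

```python
import time
from concurrent.futures import ThreadPoolExecutor

class RefreshAfterWriteCache:
    """A read past the refresh interval returns the old value and
    triggers a background reload, mirroring Guava's refreshAfterWrite."""

    def __init__(self, loader, refresh_after):
        self._loader = loader
        self._refresh_after = refresh_after
        self._executor = ThreadPoolExecutor(max_workers=2)
        self._entries = {}  # key -> [value, write_time, pending_future_or_None]

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            # cache miss: the caller pays for the load, like load()
            value = self._loader(key)
            self._entries[key] = [value, time.monotonic(), None]
            return value
        value, written, pending = entry
        if pending is None and time.monotonic() - written > self._refresh_after:
            # entry is due for refresh: reload asynchronously, like reload()
            entry[2] = self._executor.submit(self._reload, key)
        return value  # the stale value is served until the reload lands

    def _reload(self, key):
        value = self._loader(key)
        self._entries[key] = [value, time.monotonic(), None]
```

The executor here plays the role of the ThreadPoolExecutor mentioned above: the reload runs off the caller's thread, so subsequent reads never block on a refresh.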
Read More

Sunday, February 26, 2017

Cache is not cleared in Google Chrome

Leave a Comment

When I deploy a version, I add a number as a query string to the JavaScript and CSS files, like the following:

'app/source/scripts/project.js?burst=32472938' 

I am using the above to burst the cache in the browser.

In Firefox, I am getting the latest script that I have modified. But in Chrome, I am not getting the latest script; I am getting the old one instead.

In the developer console, however, I can see that the burst number reflects the latest modification.

5 Answers

Answers 1

According to the Google documentation, the best way to invalidate and reload the file is to add a version number to the file name and not as a query parameter:
'app/source/scripts/project.32472938.js'

Here is a link to the documentation:
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching#invalidating_and_updating_cached_responses

Another way is to use an ETag (validation token):
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching#validating_cached_responses_with_etags

Here is how you would set up an ETag with Nginx:
http://nginx.org/en/docs/http/ngx_http_core_module.html#etag

And lastly, a tutorial about browser caching with Nginx and ETag:
https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-centos-7#step-2-%14-checking-the-default-behavior

Answers 2

I'm uncertain of whether this still applies these days, but there were some cases in the past where proxies could cause a query-string value to be ignored for caching purposes. There's an article from 2008 that discussed the idea that query-string values weren't ideal for the purpose of breaking caching, and that it was better to revise the filename itself -- so, referencing project_32472938.js instead of using the query-string.

(I've also seen, in places, some discussion of unusual cases where certain clients were not seeing these updates, but it seemed to be inconsistent -- not tied to Chrome, necessarily, but more likely tied to a specific installation of Chrome on a specific machine. I'd certainly recommend checking the site on another computer to see if the issue is repeated there, as you could at least narrow down to whether it's Chrome in general, or your specific install of Chrome that is having problems.)

All that said, it's been quite a while since 2008, and that may not be applicable these days. However, if it continues to be a problem and you can't find a solution to the underlying issue, it at least offers a method you can use to circumvent it.
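For completeness, here is a small Python sketch of the filename-revision approach the article recommends. The helper name and the eight-character hash length are arbitrary choices for illustration, not part of any standard:

```python
import hashlib
import pathlib

def hashed_name(path):
    """Return the filename with a short content hash embedded, e.g.
    project.js -> project.4f2b9c1a.js; any change to the file contents
    yields a brand-new URL that no cache layer can serve stale."""
    p = pathlib.Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}.{digest}{p.suffix}"
```

A build step would rename the deployed assets with this helper and rewrite the references in the HTML, so stale copies simply never match the new URL.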

Answers 3

I don't think that Chrome actually causes the problem, because it would break almost all web applications (eg: https://www.google.com/search?q=needle)

It could be that your deployment was a bit delayed, eg.

  1. Start install new scripts
  2. Check with Chrome (receives old version on new ID)
  3. Install finishes
  4. You try with Firefox (receives new version)
  5. Chrome still shows old version because it cached the old script with new ID

Or you have a CDN like Azure between your web server and your browser.

With standard settings, Azure CDN ignores the query string when computing its cache key.

Answers 4

Try these meta tags:

<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />

Answers 5

I am not sure, but give this a try...

Google Chrome may always ignore it otherwise.

You need to add a '?random.number' or '?date.code' to every link each time a URL is requested on your website. E.g. if 'myhomepage.html?272772' is stored in the cache, then by generating a new random number, e.g. 'myhomepage.html?2474789', Google Chrome will be forced to look for a new copy.
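As a sketch, the per-request cache-busting parameter this answer describes could be generated like the following; the helper is hypothetical, and a Unix timestamp stands in for the random number:

```python
import time

def busted_url(path):
    """Append a per-request cache-busting query parameter so each page
    load requests a URL the browser has never cached before."""
    return f"{path}?burst={int(time.time())}"
```

Note that this defeats caching entirely (every request is a miss), which is why the filename-revision approach from Answer 1 is usually preferred.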

Read More

Saturday, February 25, 2017

Reactive Caching of HTTP Service

Leave a Comment

I am using RxJS 5 (5.0.1) to cache in Angular 2. It works well.

The meat of the caching function is:

const observable = Observable.defer(
    () => actualFn().do(() => this.console.log('CACHE MISS', cacheKey))
  )
  .publishReplay(1, this.RECACHE_INTERVAL)
  .refCount()
  .take(1)
  .do(() => this.console.log('CACHE HIT', cacheKey));

The actualFn is this.http.get('/some/resource').

Like I say, this is working perfectly for me. The cached value is returned from the observable for the duration of the RECACHE_INTERVAL. If a request is made after that interval, actualFn() will be called.

What I am trying to figure out is how to return the last value when the RECACHE_INTERVAL expires and actualFn() is called. There is a span of time between when the RECACHE_INTERVAL expires and when actualFn() has replayed during which the observable doesn't return a value. I would like to get rid of that gap and always return the last value.

I could use a side effect and store the last good value, calling .next(lastValue) while waiting for the HTTP response to return, but this seems naive. I would like an "RxJS way", a pure-function solution, if possible.

3 Answers

Answers 1

Almost any complicated logic quickly goes out of control with plain RxJS. I would rather implement a custom cache operator from scratch; you can use this gist as an example.

Answers 2

Your example looks exactly the same as an example in the SO Documentation on how to implement caching with RxJS 5: Caching HTTP responses

If you modify it a little you can simulate the situation you describe, but I don't think it happens the way you think:

See this demo: https://jsbin.com/todude/10/edit?js,console

Notice that I'm trying to get cached results at 1200 ms, when the cache is invalidated, and then at 1300 ms, when the previous request is still pending (it takes 200 ms). Both results are received as they should be.

This happens because when you subscribe and publishReplay() doesn't contain any valid value, it won't emit anything and won't complete immediately (thanks to take(1)), so it needs to subscribe to its source, which makes the HTTP request (this in fact happens in refCount()).

Then the second subscriber won't receive anything either and will be added to the array of observers in publishReplay(). It won't make another subscription because publishReplay() is already subscribed to its source (via refCount()) and is waiting for the response.

So the situation you're describing shouldn't happen, I think. If it does, please make a demo that demonstrates the problem.

EDIT:

Emitting both invalidated item and fresh items

The following example shows slightly different functionality from the linked example. If the cached response is invalidated, it'll be emitted anyway, followed by the new value. This means the subscriber receives one or two values:

  • 1 value: The cached value
  • 2 values: The invalidated cached value and then a fresh value that'll be cached from now on.

The code could look like the following:

let counter = 1;
const RECACHE_INTERVAL = 1000;

function mockDataFetch() {
  return Observable.of(counter++)
    .delay(200);
}

let source = Observable.defer(() => {
  const now = (new Date()).getTime();

  return mockDataFetch()
    .map(response => {
      return {
        'timestamp': now,
        'response': response,
      };
    });
});

let updateRequest = source
  .publishReplay(1)
  .refCount()
  .concatMap(value => {
    if (value.timestamp + RECACHE_INTERVAL > (new Date()).getTime()) {
      return Observable.from([value.response, null]);
    } else {
      return Observable.of(value.response);
    }
  })
  .takeWhile(value => value);

setTimeout(() => updateRequest.subscribe(val => console.log("Response 0:", val)), 0);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 50:", val)), 50);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 200:", val)), 200);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 1200:", val)), 1200);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 1300:", val)), 1300);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 1500:", val)), 1500);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 3500:", val)), 3500);

See live demo: https://jsbin.com/ketemi/2/edit?js,console

This prints to console the following output:

Response 0: 1
Response 50: 1
Response 200: 1
Response 1200: 1
Response 1300: 1
Response 1200: 2
Response 1300: 2
Response 1500: 2
Response 3500: 2
Response 3500: 3

Notice that 1200 and 1300 first received the old cached value 1 immediately, and then another emission with the fresh value 2.
On the other hand 1500 received only the new value because 2 is already cached and is valid.

The most confusing thing is probably why I am using concatMap().takeWhile(). This is because I need to make sure that the fresh response (not the invalidated one) is the last value before sending the complete notification, and there's probably no single operator for that (neither first() nor takeWhile() alone is applicable to this use-case).
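Stripped of RxJS, the emission logic of this example (emit the possibly-stale cached value, then a fresh one only if the cache was invalid) can be sketched as a plain Python generator. The cache dict and fetch callback are stand-ins for publishReplay(1) and the HTTP request:

```python
import time

def cached_then_fresh(cache, fetch, ttl):
    """Yield the cached value immediately; if it was stale (or absent),
    also fetch a fresh value, cache it, and yield it as well."""
    stale = cache.get('value')
    if stale is not None:
        yield stale                                   # possibly-invalid cached value
        if time.monotonic() - cache['timestamp'] <= ttl:
            return                                    # still valid: single emission
    fresh = fetch()                                   # stands in for the HTTP request
    cache['value'] = fresh
    cache['timestamp'] = time.monotonic()
    yield fresh
```

A subscriber therefore sees one value when the cache is valid and two values (stale, then fresh) when it has been invalidated, which is exactly the behaviour the console output above demonstrates.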

Emitting only the current item without waiting for refresh

Yet another use-case could be when we want to emit only the cached value while not waiting for fresh response from the HTTP request.

let counter = 1;
const RECACHE_INTERVAL = 1000;

function mockDataFetch() {
  return Observable.of(counter++)
    .delay(200);
}

let source = Observable.defer(() => {
  const now = (new Date()).getTime();

  return mockDataFetch()
    .map(response => {
      return {
        'timestamp': now,
        'response': response,
      };
    });
});

let updateRequest = source
  .publishReplay(1)
  .refCount()
  .concatMap((value, i) => {
    if (i === 0) {
      if (value.timestamp + RECACHE_INTERVAL > (new Date()).getTime()) { // is cached item valid?
        return Observable.from([value.response, null]);
      } else {
        return Observable.of(value.response);
      }
    }
    return Observable.of(null);
  })
  .takeWhile(value => value);

setTimeout(() => updateRequest.subscribe(val => console.log("Response 0:", val)), 0);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 50:", val)), 50);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 200:", val)), 200);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 1200:", val)), 1200);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 1300:", val)), 1300);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 1500:", val)), 1500);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 3500:", val)), 3500);
setTimeout(() => updateRequest.subscribe(val => console.log("Response 3800:", val)), 3800);

See live demo: https://jsbin.com/kebapu/2/edit?js,console

This example prints to console:

Response 0: 1
Response 50: 1
Response 200: 1
Response 1200: 1
Response 1300: 1
Response 1500: 2
Response 3500: 2
Response 3800: 3

Notice that both 1200 and 1300 receive value 1 because that's the cached value even though it's invalid now. The first call at 1200 just spawns a new HTTP request without waiting for its response and emits only the cached value. Then at 1500 the fresh value is cached so it's just reemitted. The same applies at 3500 and 3800.

Note that the subscriber at 1200 will receive the next notification immediately, but the complete notification will be sent only after the HTTP request has finished. We need to wait because sending complete right after next would make the chain dispose its disposables, which would also cancel the HTTP request (which is definitely not what we want).

Answers 3

Updated answer:

If you always want to use the previous value while a new request is being made, you can put another subject in the chain which keeps the most recent value.

You can then repeat the value so it is possible to tell whether it came from the cache or not. Subscribers can then filter out the cached values if they are not interested in them.

// Take values while they pass the predicate, then return one more
// i.e. also return the first value which returned false
const takeWhileInclusive = predicate => src =>
  src
    .flatMap(v => Observable.from([v, v]))
    .takeWhile((v, index) =>
      index % 2 === 0 ? true : predicate(v, index)
    )
    .filter((v, index) => index % 2 !== 1);

// Source observable will still push its values into the subject
// even after the subscriber unsubscribes
const keepHot = subject => src =>
  Observable.create(subscriber => {
    src.subscribe(subject);

    return subject.subscribe(subscriber);
  });

const cachedRequest = request
  // Subjects below only store the most recent value
  // so make sure the most recent is marked as 'fromCache'
  .flatMap(v => Observable.from([
    {fromCache: false, value: v},
    {fromCache: true, value: v}
  ]))
  // Never complete subject
  .concat(Observable.never())
  // backup cache while new request is in progress
  .let(keepHot(new ReplaySubject(1)))
  // main cache with expiry time
  .let(keepHot(new ReplaySubject(1, this.RECACHE_INTERVAL)))
  .publish()
  .refCount()
  .let(takeWhileInclusive(v => v.fromCache));

// Cache will be re-filled by request when there is another subscription after RECACHE_INTERVAL
// Subscribers will get the most recent cached value first, then an updated value

https://acutmore.jsbin.com/kekevib/8/edit?js,console

Original answer:

Instead of setting a window size on the ReplaySubject, you could change the source observable to repeat after a delay.

const observable = Observable.defer(
    () => actualFn().do(() => this.console.log('CACHE MISS', cacheKey))
  )
  .repeatWhen(_ => _.delay(this.RECACHE_INTERVAL))
  .publishReplay(1)
  .refCount()
  .take(1)
  .do(() => this.console.log('CACHE HIT', cacheKey));

The repeatWhen operator requires RxJs-beta12 or higher https://github.com/ReactiveX/rxjs/blob/master/CHANGELOG.md#500-beta12-2016-09-09

Read More

Monday, February 13, 2017

PHP site not showing cache-control. Not caching anything

Leave a Comment

INTRO

I have a task to fix an existing site's problem: nothing is being cached (except within a browser session). When closing the browser and opening it again, the page loads a lot of images, JS and CSS again. As I have ~60 items every time, there is a big load problem.

PROBLEM

Looking at the Chrome console, the Audit panel shows "The following resources are missing a cache expiration..."

And the Network panel's "Response Headers" doesn't even show a "cache-control" line.

TRIED SOLUTIONS

I have set info in .htaccess file and made sure mod_expires is active:

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpg "access 1 year"
    ExpiresByType image/jpeg "access 1 year"
    ExpiresByType image/gif "access 1 year"
    ExpiresByType image/png "access 1 year"
    ExpiresByType text/css "access 1 month"
    ExpiresByType text/html "access 1 month"
    ExpiresByType application/pdf "access 1 month"
    ExpiresByType text/x-javascript "access 1 month"
    ExpiresByType application/x-shockwave-flash "access 1 month"
    ExpiresByType image/x-icon "access 1 year"
    ExpiresDefault "access 1 month"
</IfModule>

I added a Cache-control meta tag in the HTML head, which also shows in the page's source, so it is compiled.

<meta http-equiv="Cache-control" content="public" content="max-age=604800"> 

And I'd like to add that it most likely isn't a server issue, as the production page's host has it set to the usual default. (And I don't have access to that server anyway.)
I'd be super delighted if someone could give me some pointers on what I am missing, haven't checked, or simply don't understand.

Added: screenshot of the main.css headers.

Thanks!

2 Answers

Answers 1

You can set the headers through PHP, since this is a PHP site.

<?php
  header("Cache-Control: max-age=2592000"); // 30 days (60sec * 60min * 24hours * 30days)
?>

Also, you can use FilesMatch like this in your .htaccess:

<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
    Header set Cache-Control "max-age=31536000, public"
</FilesMatch>
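The magic numbers in both snippets are just durations expressed in seconds. A tiny helper (hypothetical, shown in Python for illustration) makes the derivation explicit:

```python
def cache_control_header(days=0, hours=0, minutes=0, public=True):
    """Build a Cache-Control value, deriving max-age (in seconds) from
    human-readable units instead of hard-coding the number."""
    max_age = ((days * 24 + hours) * 60 + minutes) * 60
    scope = "public" if public else "private"
    return f"max-age={max_age}, {scope}"
```

For example, 30 days comes out as max-age=2592000 (the PHP snippet) and 365 days as max-age=31536000 (the .htaccess snippet).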

Answers 2

Well, the answer is a bit stupid (as I expected), but I hadn't read about it anywhere and simply forgot about the need for it.

Solution

It turned out all those changes did work (as I said, everything was activated on the server, the access files, etc.).
The problem was that I hadn't cleared the cache after changing the caching configuration. Three days later I started working on some CSS, needed to reset the cache and boom: all the new headers were active for all the items.
As I said: stupid. (Please don't downvote for this triviality; nobody had figured out that it could be the problem.)

Read More