Monday, August 6, 2018

MongoDB CursorNotFound Error on collection.find() for a few hundred small records


I'm running MongoDB 3.6.6 (on a small MongoDB Atlas cluster, not sharded) using the native Node.js driver (v3.0.10).

My code looks like this:

const records = await collection.find({
  userId: ObjectId(userId),
  status: 'completed',
  lastUpdated: {
    $exists: true,
    $gte: '2018-06-10T21:24:12.000Z'
  }
}).toArray();

I'm seeing this error occasionally:

{   "name": "MongoError",   "message": "cursor id 16621292331349 not found",   "ok": 0,   "errmsg": "cursor id 16621292331349 not found",   "code": 43,   "codeName": "CursorNotFound",   "operationTime": "6581469650867978275",   "$clusterTime": {     "clusterTime": "6581469650867978275",     "signature": {       "hash": "aWuGeAxOib4XWr1AOoowQL8yBmQ=",       "keyId": "6547661618229018626"     }   } } 

This is happening for queries that return a few hundred records at most. The records are a few hundred bytes each.

I looked online for what the issue might be but most of what I found is talking about cursor timeouts for very large operations that take longer than 10 minutes to complete. I can't tell exactly how long the failed queries took from my logs, but it's at most two seconds (probably much, much shorter than that).

I tested running the query with the same values as one that errored out and the execution time from explain was just a few milliseconds:

"executionStats" : {     "executionSuccess" : true,      "nReturned" : NumberInt(248),      "executionTimeMillis" : NumberInt(3),      "totalKeysExamined" : NumberInt(741),      "totalDocsExamined" : NumberInt(741),      "executionStages" : {...}     },      "allPlansExecution" : []     ] }  

Any ideas? Could intermittent network latency cause this error? How would I mitigate that? Thanks.

2 Answers

Answer 1

You can try these three things:


a) Disable the cursor timeout

db.collection.find().noCursorTimeout(); 

You must close the cursor at some point with cursor.close();
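That snippet is mongo shell syntax. Since the question uses the native Node.js driver, the equivalent there is a cursor flag; a minimal sketch, assuming driver 3.x (where the cursor exposes addCursorFlag()) and that collection and ObjectId are in scope as in the question's code:

// Disable the server-side idle timeout (normally ~10 minutes) on this cursor.
const cursor = collection
  .find({ userId: ObjectId(userId), status: 'completed' })
  .addCursorFlag('noCursorTimeout', true);

try {
  const records = await cursor.toArray();
  // ... work with records ...
} finally {
  // Without the timeout, the server keeps the cursor alive until you close it.
  await cursor.close();
}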


b) Or reduce the batch size

db.inventory.find().batchSize(10); 
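The same method exists on the Node driver's cursor; a small sketch using the question's query shape:

// Smaller batches mean more frequent getMore round trips, so the cursor
// spends less time sitting idle between fetches.
const cursor = collection
  .find({ userId: ObjectId(userId), status: 'completed' })
  .batchSize(10);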

c) Retry when the cursor expires:

let processed = 0;
let updated = 0;

while (true) {
    const cursor = db.snapshots.find().sort({ _id: 1 }).skip(processed);

    try {
        while (cursor.hasNext()) {
            const doc = cursor.next();

            ++processed;

            if (doc.stream && doc.roundedDate && !doc.sid) {
                db.snapshots.update({
                    _id: doc._id
                }, { $set: {
                    sid: `${ doc.stream.valueOf() }-${ doc.roundedDate }`
                }});

                ++updated;
            }
        }

        break; // Done processing all documents; exit the outer loop.
    } catch (err) {
        if (err.code !== 43) {
            // Something other than CursorNotFound went wrong; abort the loop.
            throw err;
        }
        // Otherwise fall through and re-create the cursor where we left off.
    }
}
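That is mongo shell code; with the Node.js driver the same idea could look like this sketch (processAllWithRetry and processDoc are hypothetical names, and error code 43 is CursorNotFound):

// Sketch: re-create the cursor and resume after the last processed document
// whenever it dies with CursorNotFound (code 43).
async function processAllWithRetry(collection, query, processDoc) {
  let processed = 0;

  while (true) {
    const cursor = collection.find(query).sort({ _id: 1 }).skip(processed);

    try {
      while (await cursor.hasNext()) {
        const doc = await cursor.next();
        processed++;
        await processDoc(doc); // hypothetical per-document handler
      }
      return processed; // finished without the cursor expiring
    } catch (err) {
      if (err.code !== 43) throw err; // only retry on CursorNotFound
      // Otherwise loop around and resume from where we left off.
    }
  }
}

The sort({ _id: 1 }) plus skip(processed) is what makes the resume deterministic; without a stable sort, the retry could skip or repeat documents.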

Answer 2

First of all, if your result set is large, it's not a good idea to use the toArray() method; it's better to use forEach() and loop through the data, like this:

// find() returns a cursor synchronously, so there is nothing to await here.
const cursor = collection.find({
  userId: ObjectId(userId),
  status: 'completed',
  lastUpdated: {
    $exists: true,
    $gte: '2018-06-10T21:24:12.000Z'
  }
});

// forEach() returns a promise when no callback is passed.
await cursor.forEach((record) => {
  // do something ...
});

Second, you can use the { allowDiskUse: true } option when working with large data sets:

const cursor = collection.find({
  userId: ObjectId(userId),
  status: 'completed',
  lastUpdated: {
    $exists: true,
    $gte: '2018-06-10T21:24:12.000Z'
  }
}, { allowDiskUse: true });
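Note that find() only supports allowDiskUse from MongoDB 4.4 onward; on 3.6 (the version in the question) it is an aggregation option. A sketch of the same filter as an aggregation:

// allowDiskUse has long been supported on the aggregation pipeline.
const records = await collection.aggregate([
  {
    $match: {
      userId: ObjectId(userId),
      status: 'completed',
      lastUpdated: { $exists: true, $gte: '2018-06-10T21:24:12.000Z' }
    }
  }
], { allowDiskUse: true }).toArray();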
