Showing posts with label couchdb.

Tuesday, August 14, 2018

pouchdb db.login is not a function


I tried using these imports:

import PouchDB from 'pouchdb';
import PouchDBAuth from 'pouchdb-authentication';

PouchDB.plugin(PouchDBAuth)

Using these imports generates the error: Module '"pouchdb-authentication"' has no default export.

PouchDB.plugin(require('pouchdb-authentication')); 

Using require removes the error, but db.login() is still not a function. Can anyone suggest where the issue is?

1 Answer

Answer 1

Well, I found why it was not working in my situation. I was using this:

import * as PouchDBAuthentication from 'pouchdb-authentication';

instead of

import PouchDBAuthentication from 'pouchdb-authentication'; 

So the right way is

import PouchDBAuthentication from 'pouchdb-authentication';
import PouchDB from 'pouchdb';

PouchDB.plugin(PouchDBAuthentication);

Additionally, the steps described in this issue should be followed: https://github.com/pouchdb-community/pouchdb-authentication/issues/211
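Putting it together, a minimal sketch of a working setup looks roughly like this (the CouchDB URL, database name and credentials below are placeholders, not taken from the question):

import PouchDB from 'pouchdb';
import PouchDBAuthentication from 'pouchdb-authentication';

PouchDB.plugin(PouchDBAuthentication);

// The authentication methods only work against a remote (http/https) CouchDB,
// so create the instance from the server URL; skip_setup avoids trying to
// create the database before logging in.
const db = new PouchDB('http://localhost:5984/mydb', { skip_setup: true });

db.login('myuser', 'mypassword')
  .then(() => db.getSession())
  .then(session => console.log('logged in as', session.userCtx.name))
  .catch(err => console.error(err));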


Thursday, May 10, 2018

CouchDB won't work over SSL


I am running the CouchDB Docker container, V.2.1.1. Everything is working at this point except for SSL. I am following the CouchDB documentation on SSL setup. The container has OpenSSL 1.0.1t.
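For reference, the documented setup amounts to an [ssl] section in local.ini roughly like the following (the paths are placeholders for wherever the self-signed certificate and key were generated); CouchDB then serves HTTPS on port 6984 by default:

[ssl]
enable = true
cert_file = /full/path/to/server_cert.pem
key_file = /full/path/to/server_key.pem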

As shown in the documentation, I am using a self-signed certificate. When I try to connect to the SSL page on port 6984:

Chrome tells me

"ERR_CONNECTION_CLOSED". 

curl gives me

curl -k https://localhost:6984

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6984 

In the server log, I get a whole lot of this.

hello terminated with reason: no function clause matching ssl_cipher:hash_algorithm 

A search on this last error turns up information indicating that the Erlang version has an issue. However, I believe the CouchDB container has an already patched version. I did try and upgrade with:

apt-get install erlang 

This made no difference. Search results also point to the version of OpenSSL having a problem. I upgraded to OpenSSL 1.1.1 from source and recreated the certificates, but the issue persists.

As requested, here is the output from a few more commands.

openssl s_client -connect localhost:6984

CONNECTED(00000005)
140736008328136:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.50.2/libressl/ssl/s23_lib.c:124:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 318 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
---

curl --version

curl 7.54.0 (x86_64-apple-darwin17.0) libcurl/7.54.0 LibreSSL/2.0.20 zlib/1.2.11 nghttp2/1.24.0
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz HTTP2 UnixSockets HTTPS-proxy

curl -k -v https://localhost:6984

* Rebuilt URL to: https://localhost:6984/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 6984 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
    CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6984
* stopped the pause stream!
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6984

curl -k --ciphers DEFAULT https://localhost:6984

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6984 

curl -k --ciphers ECDHE-RSA-AES256-GCM-SHA384 https://localhost:6984

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6984 

The output from the following three commands is very similar. I will just show the differences. However, it seems that a handshake is now taking place with all of these commands.

$ openssl s_client -tls1 -connect localhost:6984

CONNECTED(00000005)
SSL handshake has read 1762 bytes and written 400 bytes
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 18C5DF9DCA1B8AA0DBD33258BCD253053F8D1D91B524B0561A1C0FAB8CFB5146
    Master-Key: FD0C57E4E8FB992C0323D43930C104D82B69C4200F42E03EDB51E38A47448D62FDCB6E813583E2177A339B74B4D0CC4A
    Start Time: 1525593658
    Timeout   : 7200 (sec)

$ path/to/brew/version/of/openssl s_client -connect localhost:6984

CONNECTED(00000003)
Peer signing digest: SHA512
Server Temp Key: DH, 1024 bits
SSL handshake has read 1796 bytes and written 537 bytes
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA256
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : DHE-RSA-AES256-SHA256
    Session-ID: A19D67CBE634843181859DB2C3C4D1A3416C9F7DAA85CF470D412FE723AD49B4
    Master-Key: 61B711B9BEDB651868607527439D01B421780C7D584FCE68C4754A7A7F3563923409C03F4B68BB7914397B48A92FC756
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1525593604
    Timeout   : 300 (sec)

$ path/to/brew/version/of/openssl s_client -tls1 -connect localhost:6984

SSL handshake has read 1762 bytes and written 397 bytes
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 6CC7FFE1C7CE258F105C7ADD5D8A9C0DFFB26A5A9555EB218EE48E519D361208
    Master-Key: 2D6DFAC01544F6FF5F4138D877A4105485D5A2F77B58B4796822625E2E602455C38E3EEB2CBACE07FA03D207B07C715E
    Start Time: 1525593717
    Timeout   : 7200 (sec)

$ curl -k --tlsv1 https://localhost:6984

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6984 

$ curl -k --tlsv1.0 https://localhost:6984

{"couchdb":"Welcome","version":"2.1.1","features":["scheduler"],"vendor":{"name":"The Apache Software Foundation"}} 

So I am guessing there is a problem with the built-in version of LibreSSL? The next question is what can be done about it?

2 Answers

Answer 1

If your SSL certificate is self-signed:


You didn't show your curl command, but I guess you are not using the -k option; you should:

-k, --insecure
        (TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl
        to proceed and operate even for server connections otherwise considered insecure.

Answer 2

In order to dig deeper, can you post the output of the following commands?

$ openssl s_client -connect localhost:6984
$ curl --version
$ curl -k -v https://localhost:6984
$ curl -k --ciphers DEFAULT https://localhost:6984
$ curl -k --ciphers ECDHE-RSA-AES256-GCM-SHA384 https://localhost:6984

By the way, I notice that your curl is using LibreSSL, not OpenSSL, as indicated in the error message you're getting:

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6984


When you try openssl:

$ openssl s_client -connect localhost:6984 

You are getting this error:

CONNECTED(00000005) 140736008328136:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.50.2/libressl/ssl/s23_lib.c:124:

Can you please report the output of this command:

$ openssl s_client -tls1 -connect localhost:6984 

Also, it might be inferred that the cause of the problem is your macOS default version of LibreSSL/OpenSSL. To fix the problem, try installing the brew version of OpenSSL, run this command again, and please report the output:

$ path/to/brew/version/of/openssl s_client -connect localhost:6984 

Also please post the output of this too:

$ path/to/brew/version/of/openssl s_client -tls1 -connect localhost:6984 

Based on your reported outputs, please try the following command and see if it works:

$ curl -k --tlsv1 https://localhost:6984 

Friday, April 27, 2018

System freezes when parent and child connectivity


I have used PouchDB with Electron and connected two systems through a local LAN. They share a single database: one system acts as the parent, and the other (child) system uses the same PouchDB database.

I used express-pouchdb to let the child system connect to the parent's PouchDB.

A background sync runs on the parent system from CouchDB to PouchDB.

The connection works as expected. The problem is that after some time connected, both the parent and child systems freeze. Task Manager shows disk at 100% and memory at 100%.

Both systems: Windows 10 Pro, 4 GB RAM.

1 Answer

Answer 1

Try increasing the memory allocated to your script with the Node option --max_old_space_size=<size>.

Also try monitoring the number of listeners you have with emitter.getMaxListeners() and emitter.listenerCount(eventName), and increase the limit with emitter.setMaxListeners(n).
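A minimal sketch of both suggestions (the database name and limits below are placeholders, and the emitter shown is a PouchDB changes feed, just as one example of where listeners tend to accumulate):

// 1) Launch Node/Electron with a bigger heap, e.g.:
//      node --max_old_space_size=4096 main.js

// 2) Inspect and raise the listener limit on the emitter in question:
const PouchDB = require('pouchdb');
const db = new PouchDB('mydb');

const emitter = db.changes({ live: true });

console.log(emitter.getMaxListeners());        // default limit is 10
console.log(emitter.listenerCount('change'));  // handlers currently attached
emitter.setMaxListeners(30);                   // raise the threshold if genuinely needed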


Saturday, October 21, 2017

Which nosql option relative to stored procedures and large arrays?


I have a use case for a nosql data store but I don't know which one to use:

Each document in my data store has a key for _id and another key as an array of objects. Each object hash element of this array has a key for _elementid and another for color.

I want my server proxy to send an update request to the data store with a substring used as regex that qualifies all documents whose _id matches the regex. I then want to push an element onto the array of each document of this output. This new element will have the same color for each unshift but the _elementid will be unique for each.

Is there a nosql option out there that offers this kind of stored procedure? Does it have limits on the length of the array?

* EDIT *

(1) DOCUMENT A:

{
    _id : "this_is-an-example_10982029822",
    dataList : [
        {
            _elementid : "999999283902830",
            color : "blue"
        }, {
            _elementid : "99999273682763",
            color : "red"
        }
    ]
}

DOCUMENT B:

{
    _id : "this_is-an-example_209382093820",
    dataList : [
        {
            _elementid : "99999182681762",
            color : "yellow"
        }
    ]
}

(2) EXAMPLE OF UPDATE REQUEST

(let [regex_ready_array   ["this_is-an-example" "fetcher" "finder"]
      fetch_query_regex   (str "^" (clojure.string/join "|^" regex_ready_array))
      element_template    {:_elementid {(rand-int 1000000000000000)}
                           :color      "green"}
      updated_sister_objs (mc/bulk-update connection "arrayStore" {:_id {$regex fetch_query_regex}} "unshift" element_template)])

(3) DOCUMENT A:

{
    _id : "this_is-an-example_10982029822",
    dataList : [
        {
            _elementid : "999999146514612",
            color : "green"
        }, {
            _elementid : "999999283902830",
            color : "blue"
        }, {
            _elementid : "99999273682763",
            color : "red"
        }
    ]
}

DOCUMENT B:

{
    _id : "this_is-an-example_209382093820",
    dataList : [
        {
            _elementid : "9999997298729873",
            color : "green"
        }, {
            _elementid : "9999918262881762",
            color : "yellow"
        }
    ]
}

* EDIT 2 *

(1) the dataList array could be large (large enough that MongoDB's 16mb document size limit would present an issue);

(2) the _elementid values to be assigned to the additional dataList elements will be different for each new element, and the store will auto-assign these as random number values;

(3) a single update request should apply all updates, rather than one update per additional element;

(4) the OP is looking for a compare-and-contrast between several 'nosql solutions', with MongoDB, Cassandra, Redis and CouchDB being suggested as possible candidates.

1 Answer

Answer 1

From your question, I understand you are using JSON and Clojure.

Let's see which NoSQL stores are a good fit for JSON. A quick overview of popular NoSQL options:

  1. Apache Cassandra: The data model in Cassandra is essentially a hybrid between a key-value and a column-oriented (or tabular) database management system. Its data model is a partitioned row store with tunable consistency.

  2. Redis: Redis maps keys to typed values. It has abstract datatypes other than strings, such as Lists, Sets, Sorted Sets, Hash Tables and Geospatial data.

  3. Apache CouchDB : CouchDB manages a collection of JSON documents.

  4. MongoDB: MongoDB manages a collection of BSON documents. BSON is binary JSON (http://bsonspec.org/spec.html).

If you are working with lots of JSON payloads, you could use MongoDB or Apache CouchDB. But you also want to update documents based on a regex.

Let's check the regex capabilities of CouchDB and MongoDB:

  • It can be done easily with map/reduce in both CouchDB and MongoDB.

    Regex Select: db.student.find( { f_name: { $regex: 'this_is-an-example.*'} } ).pretty();

  • MongoDB: In MongoDB we have regex operators. I have tried them and they work fine.

Reference

  1. https://docs.mongodb.com/manual/reference/operator/query/regex/

  2. mongoDB update statement using regex

  3. https://www.w3resource.com/mongodb/mongodb-regex-operators.php

  • CouchDB: I haven't tried CouchDB with regex, but as far as I know it is possible. A regex selector is available per the CouchDB documentation.

    { "selector": { "afieldname": {"$regex": "^A"} } }

Reference

  1. http://docs.couchdb.org/en/2.0.0/api/database/find.html
  2. Temporary couchdb view of documents with doc_id matching regular expression

You could use either MongoDB or CouchDB; lots of resources are available for MongoDB.
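For the MongoDB route, a rough sketch of the kind of bulk update the question describes might look like the following (collection and field names come from the question's example; generating a different random _elementid per matched document would still need to happen client-side or via an aggregation-pipeline update in newer MongoDB versions, so treat this as an illustration rather than a complete solution):

// Unshift a green element onto dataList for every document whose _id matches the regex.
db.arrayStore.updateMany(
  { _id: { $regex: /^this_is-an-example/ } },
  {
    $push: {
      dataList: {
        $each: [{ _elementid: "999999000000001", color: "green" }], // placeholder id
        $position: 0                                                // insert at the front (unshift)
      }
    }
  }
);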


Saturday, August 26, 2017

how can I generate a partially unique and sequential field in a couchdb document?


I'm very new to CouchDB and I'm wondering how I can create IDs that look like this:

Employee:DBX-0001-SP

The number portion 0001 must be unique and sequential. How can I achieve something like this in CouchDB? I've searched all over and cannot find any simple solution.

It would be best if I can generate the sequential portion in couchdb and not on the client side to avoid collisions during replication.

The current solution I have is to fetch a stored document that looks like this: {"_id": "EmployeeAutoIncrement", "value": 1}. Upon retrieval I increment the value and send it back to the server; if that succeeds, I return the new incremented value and use it as the auto-increment portion of the ID, Employee:DBX-AUTO_INCREMENT_VALUE_HERE-SP.

The issue with this is that if two people request EmployeeAutoIncrement at the same time and both update it, will that not cause conflicts? Also, if one person makes a request, goes offline, and then comes back online, wouldn't that also cause a conflict?

2 Answers

Answer 1

All of the requirements cannot be satisfied client-side when using multiple clients, some of which might be off-line.

Here is a process that results in a monotonically-increasing id:

  1. Each client saves a record with a unique id. The record should include a flag marking the record as temporary.
  2. Build an external process that listens to the changes feed for records marked as temporary. The changes feed outputs records in "time order of application".
  3. The external process should create a new record with the correct id, flagging it as permanent. Since only that process creates "permanent" records, it can read and write the EmployeeAutoIncrement value without collisions.
  4. The external process can then delete the temporary record.

The database will have double the number of records, so it will grow more quickly and need to be compacted sooner if space is an issue. Any views/queries on the employee records will need to check for the permanent flag, in case a query runs while the external process is adding a new record.
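A rough sketch of that external process, written with PouchDB as the CouchDB client (the database name, the temporary flag and the id format are assumptions for illustration, not part of the process described above):

const PouchDB = require('pouchdb');
const db = new PouchDB('http://localhost:5984/employees');

// Process changes one at a time so the counter is only ever touched serially.
let queue = Promise.resolve();

db.changes({ live: true, since: 'now', include_docs: true })
  .on('change', change => {
    queue = queue.then(() => promote(change.doc)).catch(console.error);
  });

async function promote(doc) {
  if (!doc || !doc.temporary) return;                   // only handle temporary records

  const counter = await db.get('EmployeeAutoIncrement');
  counter.value += 1;
  await db.put(counter);                                // safe: only this process writes it

  const seq = String(counter.value).padStart(4, '0');
  const { _id, _rev, temporary, ...fields } = doc;
  await db.put({ _id: `Employee:DBX-${seq}-SP`, ...fields }); // permanent record
  await db.remove(doc);                                 // drop the temporary record
}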

Answer 2

It can (sort of) be done, although I recommend you think about your design choices and why you want this done on a distributed database -- it's probably better done on the client where you can control the serialization to your sequence generator.

If you want to do it at least partially on the server, you will need an implementation of a so-called CRDT counter, as outlined in the following paper:

http://hal.upmc.fr/docs/00/55/55/88/PDF/techreport.pdf

You can find a Ruby implementation of some of those ideas here:

https://github.com/aphyr/meangirls

and a simple Couch-specific implementation of a counter (the one you need) and a set here:

https://github.com/drsm79/couch-crdt

The latter, whilst written in Python, will do almost exactly what you want, if you follow the pattern as shown in the following example:

https://github.com/drsm79/couch-crdt/blob/master/examples/counter.py

which will give you your monotonic sequence. From there, create your document _id.

Translation to JavaScript and PouchDB left as an exercise for the reader.


Wednesday, August 16, 2017

Best practice for multiple organisations on couch


So I have a Node Express app using nano with CouchDB as the backend, and this is running fine. I'm now looking to learn how I would expand it to multiple organisations.

So for instance, a wildcard DNS record allowing https://customername.myapp.com for each customer. I will then check the req.headers.host in the main database, along with checking session cookies etc in each request.

What I'm struggling to get my head around though, is how the backend will work. I think I understand that the correct method is to use a database for each organisation, and copy the design from a template database.

But if this is correct, I don't understand how this translates to my code using nano. I currently use this:

var dbname = 'customer1';
var nano = require('nano')(config.dbhost);
var couch = nano.db.use(dbname);

and then in my functions:

couch.get(somevalue, function(err, body) {
    // do stuff
});

But that won't work when the database itself is a variable. Should I be looking at moving the query to a lower level, eg nano.get('dbname', query... or something else?

EDIT

Hoping someone can give me an example of how to use middleware to change the database name dependent on the host header. I have this so far:

app.use(function(req, res, next) {
    var couch = nano.db.use(req.header.host);
    next();
});

But I don't understand how to pass the couch object through ('couch' is unknown in the rest of my routing). I have tried passing it back through next(couch), but this breaks it...

1 Answer

Answer 1

First of all, I'd recommend getting the application working with a single organization. If you want to have one database per organization, it should be fairly easy to add more organizations later.

I would have a master database and a template database. The master database would list the existing organizations in the service with some metadata. This is what NodeJS would query first to know which database you need to fetch data from.

The template database would be used to sync design documents to existing or new organizations. You can technically have old organizations with old designs and they will still work, as the data will be consistent.

In your case, the line you're looking for is this one:

var couch = nano.db.use(dbname); 

When you know which database to query, you'll have to create a new nano object for each dbname you need.

You can know which database to use directly if the databases are named after domain name or project name as long as the information is present in the request headers/session.
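As an example, a minimal sketch of the middleware approach from the question's edit, attaching the scoped database object to the request so later routes can use it (the subdomain-to-database naming is an assumption, and config.dbhost is the same config value as in the question):

var nano = require('nano')(config.dbhost);

app.use(function (req, res, next) {
    var subdomain = req.headers.host.split('.')[0];  // e.g. 'customer1' from customer1.myapp.com
    req.couch = nano.db.use(subdomain);              // scoped db object for this request
    next();                                          // call next() with no arguments; passing a value signals an error
});

// Later, in any route:
app.get('/doc/:id', function (req, res, next) {
    req.couch.get(req.params.id, function (err, body) {
        if (err) return next(err);
        res.json(body);
    });
});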

Anyhow, it's a really broad question that can be answered in many ways, and there is no single best way of doing things.

You could technically keep all of your organizations in one database if that works for you. Splitting databases allows you to isolate things a bit and make use of ACLs, and you could create databases not only per organization but for more specific things.

For example, I made a painting program that stores projects per database and allows people to cooperatively draw on a canvas. Database ACLs allowed me to restrict access to people invited to a project. My NodeJS server was technically used only for WebSockets, and the webapp was able to communicate with CouchDB directly without NodeJS.


Wednesday, February 8, 2017

Best approach on handling large dataset for offline-first Mobile Apps (PouchDB)


So I'm using Ionic v2 and using PouchDB for mobile development with sqlite. The data comes from a REST API and looks something like this:

{
  "record-id": "2332255",
  "record-name": "record-ABC-XTY",
  "record-items": [
    {
      "item-id": "456454",
      "item-name": "item-XADD",
      "category": "Cat1",
      "subcategory": "Subcat1",
      "location": "LocationXYZ",
      "owner": "Person1",
      "data-rows": [
        {
          "row-name": "sampleRowName1",
          "row-value": "ABC-XASS"
        },
        {
          "row-name": "sampleRowName2",
          "row-value": "ABC-XASS"
        }
      ]
    },
    {
      "item-id": "654645",
      "item-name": "item-BNSSA",
      "category": "Cat2",
      "subcategory": "Subcat2",
      "location": "LocationABC",
      "owner": "Person2",
      "data-rows": [
        {
          "row-name": "sampleRowName1",
          "row-value": "ABC-XASS"
        },
        {
          "row-name": "sampleRowName2",
          "row-value": "ABC-XASS"
        }
      ]
    }
  ]
}

Now as you can see, record-items could contain 100,000 items or more (estimated JSON size: 32 MB). Right now I'm lost on which approach I should take. Optimized data handling is crucial and I don't know which PouchDB approach is better. Here are some of my thoughts.

  1. Save the whole JSON data as one entry for PouchDB. But I'm worried that it will take up a large memory when retrieved and will make that application slow.
  2. Chunk the record-items into one pouch entry per record-item and retrieve them individually. I'm not sure if this is better in terms of overall performance, but the PouchDB database will probably be larger (?).

Also, there will be sorting, fetching all data (only the _ids and a few fields, just to show a list of all results) and searching.

2 Answers

Answer 1

We have a similar app that works in offline mode and stores the data locally using sqlite, though the data we deal with may not be that huge. For us the data is downloaded as an xml file from a web service; the xml has attributes row, column, value, name, etc. The app serializes the data and converts it into objects which are then inserted into sqlite (using "InsertAll"/"UpdateAll", the insert or update for items is quite fast). These xml's are loaded into the UI and the user can update "value" tags from the UI.
Search is optimized by giving the user filters so that the query runs on smaller data.

For your case I can think of 3 tables that you could use:

1) Records (Fields: RecordID, RecordName)
2) Items (Fields: ItemID (PK), RecordID (FK), ItemName, etc.)
3) Rows (Fields: ItemID (FK), RowName, RowValue)

After getting data from REST you can serialize it and insert it into the respective tables concurrently. Try giving users filters when it comes to search so that the actual data set is smaller.

Hope it helps!

Answer 2

Your basic decision is whether to embed the data or reference it. Here are some general rules for deciding:

Embed when:

  • Data typically queried together (example: user profile)
  • Child depends on parent
  • One-to-one relationship
  • One-to-few relationship
  • Changes occur at a similar rate

Reference when:

  • Unbounded one-to-many relationship exists
  • Many-to-many relationship
  • Same data repeated in many places
  • Data changes at different rates

You're correct that if you store everything as one record you may have problems with the size. The extra storage caused by splitting it up should be inconsequential.

You'll be using views to create indexes, which then feed into your queries. How you do that will probably dominate the efficiency.
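If you go the reference route with PouchDB, a rough sketch of the idea is one document per record-item with a composite _id, so the items of a record can be fetched with a key-range query (field names come from the question's JSON; the id scheme itself is an assumption):

const PouchDB = require('pouchdb');
const db = new PouchDB('records');

async function saveRecord(record) {
  // One doc per record-item, keyed as "<record-id>::<item-id>".
  const docs = record['record-items'].map(item => ({
    _id: record['record-id'] + '::' + item['item-id'],
    'record-name': record['record-name'],
    ...item
  }));
  return db.bulkDocs(docs);
}

async function getItemsForRecord(recordId) {
  // Key-range query on _id; no view needed for this access pattern.
  const res = await db.allDocs({
    startkey: recordId + '::',
    endkey: recordId + '::\ufff0',
    include_docs: true
  });
  return res.rows.map(r => r.doc);
}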


Sunday, April 17, 2016

Authenticating a CouchDB user against an external server


I have the following setup:

  1. CouchDB database that stores users and handles authentication; creates one db per user
  2. Web app that uses PouchDB to sync and authenticate via pouchdb-authentication
  3. A REST API server that gets requests from the web app and accesses CouchDB

Now, the REST API has admin access to CouchDB, so when it receives requests, it needs to do some form of authentication to make sure the sender has permissions to the database he claims to have access to. Since I use persistent sessions, the web app does not know the user password at all times (unless I store it in localstorage - obviously a bad idea). The session cookie is HttpOnly, so I can't access it.

What would be the best way to authenticate requests to the API under this scenario?

0 Answers
