I'm using Ionic v2 with PouchDB (backed by SQLite) for mobile development. The data comes from a REST API and looks something like this:
{
  "record-id": "2332255",
  "record-name": "record-ABC-XTY",
  "record-items": [
    {
      "item-id": "456454",
      "item-name": "item-XADD",
      "category": "Cat1",
      "subcategory": "Subcat1",
      "location": "LocationXYZ",
      "owner": "Person1",
      "data-rows": [
        { "row-name": "sampleRowName1", "row-value": "ABC-XASS" },
        { "row-name": "sampleRowName2", "row-value": "ABC-XASS" }
      ]
    },
    {
      "item-id": "654645",
      "item-name": "item-BNSSA",
      "category": "Cat2",
      "subcategory": "Subcat2",
      "location": "LocationABC",
      "owner": "Person2",
      "data-rows": [
        { "row-name": "sampleRowName1", "row-value": "ABC-XASS" },
        { "row-name": "sampleRowName2", "row-value": "ABC-XASS" }
      ]
    }
  ]
}
As you can see, record-items could contain 100,000 items or more (estimated JSON size: 32 MB). Right now I'm lost on which approach to take. Optimized data handling is crucial, and I don't know which PouchDB approach is better. Here are some of my thoughts:
- Save the whole JSON payload as a single PouchDB document. But I'm worried that it will take up a lot of memory when retrieved and make the application slow.
- Split record-items into one PouchDB document per item and retrieve them individually. I'm not sure whether this is better in terms of overall performance, and the number of PouchDB records will probably be much larger.
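The second option can be sketched as a simple transformation: split record-items into one document per item, giving each an _id that encodes the parent record. The `<recordId>:item:<itemId>` id scheme and the function name are assumptions for illustration, not anything from PouchDB itself:

```javascript
// Split one downloaded record into one PouchDB doc per item.
// The "<recordId>:item:<itemId>" _id scheme is an assumption; any
// scheme that groups a record's items under a common prefix works,
// because PouchDB sorts documents by _id.
function toItemDocs(record) {
  return record['record-items'].map(function (item) {
    return Object.assign(
      { _id: record['record-id'] + ':item:' + item['item-id'] },
      item
    );
  });
}

// Usage (hypothetical db handle):
//   db.bulkDocs(toItemDocs(record))
// writes the items in one batch instead of one 32 MB document.
```

Because the ids share a common prefix, all items of one record stay adjacent in the index, which matters for the listing queries discussed below.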
There will also be sorting, fetching all data (only the _ids and a few fields, just to show a list of all results), and searching.
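If the per-item-document approach is used with a prefixed _id scheme (an assumption carried over from the splitting idea above), the "list everything" case can be served by PouchDB's allDocs with a startkey/endkey range, so only ids come back rather than full documents. A minimal sketch:

```javascript
// Build allDocs options that select every item of one record by _id
// prefix. '\uffff' sorts after any ordinary character, so the range
// [prefix, prefix + '\uffff'] covers all ids starting with the prefix.
// The 'recordId:item:' scheme is an assumption, not a PouchDB feature.
function itemRange(recordId) {
  var prefix = recordId + ':item:';
  return { startkey: prefix, endkey: prefix + '\uffff', include_docs: false };
}

// Usage (hypothetical db handle):
// db.allDocs(itemRange('2332255')).then(function (res) {
//   // res.rows holds only id/key/rev per item -- enough to render a
//   // list without loading the full item bodies into memory.
// });
```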
2 Answers
Answer 1
We have a similar app that works in offline mode and stores its data locally using SQLite, though the data we deal with may not be that huge. In our case the data is downloaded as an XML file from a web service; the XML has attributes such as row, column, value, and name. The app deserializes the data into objects, which are then inserted into SQLite (using "InsertAll"/"UpdateAll", inserting or updating items is quite fast). These XML files are loaded into the UI, where the user can update the "value" tags.
Search is optimized by giving the user filters, so that queries run against a smaller data set.
For your case, I can think of three tables you could use:
1) Records (fields: RecordID, RecordName)
2) Items (fields: ItemID (PK), RecordID (FK), ItemName, etc.)
3) Rows (fields: ItemID (FK), RowName, RowValue)
After getting the data from REST, you can deserialize it and insert it into the respective tables concurrently. Try giving users filters when it comes to search, so that the actual data set is smaller.
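The deserialization step above can be sketched as flattening the nested payload into three row arrays, one per table, ready for a bulk insert. The column names follow the table layout proposed in this answer; the function name is made up for illustration:

```javascript
// Flatten the nested REST payload into three flat arrays matching the
// Records / Items / Rows tables described above. The output arrays can
// then be handed to a bulk insert (e.g. an InsertAll-style call on the
// SQLite plugin being used).
function flatten(record) {
  var records = [{ RecordID: record['record-id'], RecordName: record['record-name'] }];
  var items = [];
  var rows = [];
  record['record-items'].forEach(function (item) {
    items.push({
      ItemID: item['item-id'],
      RecordID: record['record-id'],   // FK back to Records
      ItemName: item['item-name'],
      Category: item['category']
    });
    item['data-rows'].forEach(function (r) {
      rows.push({ ItemID: item['item-id'], RowName: r['row-name'], RowValue: r['row-value'] });
    });
  });
  return { records: records, items: items, rows: rows };
}
```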
Hope it helps!
Answer 2
Your basic decision is whether to embed the data or reference it. Here are some general rules for deciding:
Embed when:
- Data typically queried together (example: user profile)
- Child depends on parent
- One-to-one relationship
- One-to-few relationship
- Changes occur at a similar rate
Reference when:
- Unbounded one-to-many relationship exists
- Many-to-many relationship
- Same data repeated in many places
- Data changes at different rates
You're correct that if you store everything as one record you may have problems with its size. The extra storage overhead from splitting it up should be inconsequential.
You'll be using views to create indexes, which then feed into your queries. How you do that will probably dominate the efficiency.
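For the per-item-document layout, such a view is just a map function in a design document. A minimal sketch, with emit() simulated so the function can be exercised standalone (in real PouchDB the query engine supplies emit; the field names follow the question's JSON):

```javascript
// Simulated emit(): in a real PouchDB view, emit is provided by the
// query engine and writes (key, value) pairs into the index.
var emitted = [];
function emit(key, value) { emitted.push({ key: key, value: value }); }

// Map function indexing item documents by item-name, so sorting and
// prefix search hit the index instead of scanning every document.
function byItemName(doc) {
  if (doc['item-name']) {
    emit(doc['item-name'], null);
  }
}

// Usage with real PouchDB (hypothetical db handle):
// db.query({ map: byItemName }, { startkey: 'item-A', endkey: 'item-A\uffff' });
```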