Showing posts with label architecture. Show all posts

Friday, August 18, 2017

Android Architecture Components network threads


I'm currently checking out the following guide: https://developer.android.com/topic/libraries/architecture/guide.html

The NetworkBoundResource class:

// ResultType: Type for the Resource data
// RequestType: Type for the API response
public abstract class NetworkBoundResource<ResultType, RequestType> {
    // Called to save the result of the API response into the database
    @WorkerThread
    protected abstract void saveCallResult(@NonNull RequestType item);

    // Called with the data in the database to decide whether it should be
    // fetched from the network.
    @MainThread
    protected abstract boolean shouldFetch(@Nullable ResultType data);

    // Called to get the cached data from the database
    @NonNull @MainThread
    protected abstract LiveData<ResultType> loadFromDb();

    // Called to create the API call.
    @NonNull @MainThread
    protected abstract LiveData<ApiResponse<RequestType>> createCall();

    // Called when the fetch fails. The child class may want to reset components
    // like rate limiter.
    @MainThread
    protected void onFetchFailed() {
    }

    // returns a LiveData that represents the resource
    public final LiveData<Resource<ResultType>> getAsLiveData() {
        return result;
    }
}

I'm a bit confused here about the use of threads.
Why is @MainThread applied here for networkIO?
Also, @WorkerThread is applied for saving into the DB, whereas @MainThread is used for retrieving results.

Is it bad practice to use a worker thread by default for networkIO and local DB interaction?

I'm also checking out the following demo (GithubBrowserSample): https://github.com/googlesamples/android-architecture-components
This confuses me from a threading point of view.
The demo uses the Executors framework and defines a fixed pool with 3 threads for networkIO; however, in the demo a worker task is defined for only one call, i.e. the FetchNextSearchPageTask. All other network requests seem to be executed on the main thread.

Can someone clarify the rationale?

1 Answer

Answers 1

It seems you have a few misconceptions.

Generally it is never OK to perform network calls on the Main (UI) thread, but unless you have a lot of data it might be OK to fetch data from the DB on the Main thread, and this is what the Google example does.

1.

The demo uses executors framework, and defines a fixed pool with 3 threads for networkIO, however in the demo only a worker task is defined for one call, i.e. the FetchNextSearchPageTask.

First of all, since Java 8 you can create a simple implementation of some interfaces (so-called "functional interfaces") using lambda syntax. This is what happens in NetworkBoundResource:

appExecutors.diskIO().execute(() -> {
    saveCallResult(processResponse(response));
    appExecutors.mainThread().execute(() ->
            // we specially request a new live data,
            // otherwise we will get immediately last cached value,
            // which may not be updated with latest results received from network.
            result.addSource(loadFromDb(),
                    newData -> result.setValue(Resource.success(newData)))
    );
});

First, the task (processResponse and saveCallResult) is scheduled on a thread provided by the diskIO Executor, and then from that thread the rest of the work is scheduled back onto the Main thread.
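This executor hop can be sketched without any Android classes. The following is a minimal, framework-free imitation; ThreadHop, the executor names, and the log format are illustrative, not part of the sample:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadHop {
    // Runs a "save" step on a diskIO thread, then hops back to a "main" thread,
    // mirroring appExecutors.diskIO().execute(...) followed by
    // appExecutors.mainThread().execute(...).
    static String hop() throws InterruptedException {
        ExecutorService diskIO = Executors.newSingleThreadExecutor(r -> new Thread(r, "diskIO"));
        ExecutorService main = Executors.newSingleThreadExecutor(r -> new Thread(r, "main"));
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder log = new StringBuilder();

        diskIO.execute(() -> {
            // stands in for saveCallResult(processResponse(response))
            log.append("save on ").append(Thread.currentThread().getName());
            main.execute(() -> { // schedule the rest of the work back onto "main"
                log.append(", publish on ").append(Thread.currentThread().getName());
                done.countDown();
            });
        });

        done.await();
        diskIO.shutdown();
        main.shutdown();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(hop()); // save on diskIO, publish on main
    }
}
```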

2.

Why is @MainThread applied here for networkIO?

and

All other network requests seem to be executed on the main thread.

This is not so. Only the result wrapper, i.e. LiveData<ApiResponse<RequestType>>, is created on the main thread. The network request is done on a different thread. This is not easy to see because the Retrofit library is used to do all the network-related heavy lifting, and it nicely hides such implementation details. Still, if you look at the LiveDataCallAdapter that wraps Retrofit into a LiveData, you can see that Call.enqueue is used, which is actually an asynchronous call (scheduled internally by Retrofit).
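The same shape can be seen in a simplified, framework-free sketch: creating the observable wrapper is cheap and returns immediately, while the actual request runs on another thread. Here FakeCall stands in for Retrofit's Call and CompletableFuture plays the role of LiveData; all names are illustrative:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Stand-in for a response wrapper like ApiResponse<T>.
class ApiResponse<T> {
    final T body;
    ApiResponse(T body) { this.body = body; }
}

// Stand-in for Retrofit's Call: enqueue() returns at once and the work
// happens on the call's own background executor.
class FakeCall<T> {
    private final T result;
    private final ExecutorService io = Executors.newSingleThreadExecutor();
    FakeCall(T result) { this.result = result; }

    void enqueue(Consumer<T> callback) {
        io.submit(() -> callback.accept(result)); // "network" work off the caller thread
        io.shutdown();
    }
}

public class AdapterSketch {
    // "Adapting" the call is cheap and main-thread safe: it only wires up a
    // callback, the way LiveDataCallAdapter wires Call.enqueue into a LiveData.
    static <T> CompletableFuture<ApiResponse<T>> adapt(FakeCall<T> call) {
        CompletableFuture<ApiResponse<T>> live = new CompletableFuture<>();
        call.enqueue(body -> live.complete(new ApiResponse<>(body)));
        return live; // returns immediately; the response arrives later
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<ApiResponse<String>> live = adapt(new FakeCall<>("repos-page-1"));
        System.out.println(live.get().body); // repos-page-1
    }
}
```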

Actually, if not for the "pagination" feature, the example would not need the networkIO Executor at all. "Pagination" is a complicated feature, and thus it is implemented using an explicit FetchNextSearchPageTask. This is one place where I think the Google example is not done very well: FetchNextSearchPageTask doesn't re-use the request-parsing logic (i.e. processResponse) from RepoRepository but just assumes that it is trivial (which it is now, but who knows about the future...). Also, there is no scheduling of the merging job onto the diskIO Executor, which is inconsistent with the rest of the response processing.


Wednesday, July 19, 2017

What is the best approach to upload 1000+ records to a server that also contains images for each record from an iOS/Android app?


I have an app that works offline. It is assumed that 1000+ records, each containing images, are created during this period. Whenever connectivity is established, all 1000+ records should be sent to the server. What should the approach be, such that it also handles any interruption between the network calls or an API failure response?

I assume I have to send the records in batches, but how do I handle interruptions, maintain consistency, and prevent any kind of data loss?

7 Answers

Answers 1

I guess the best way here is to send each record separately (if they are not related to each other).

If you have media attachments, sending each record will take about 2 seconds on average when uploading over mobile internet at ~2 MB/s. If you send a large batch of records in each request, you must have a stable connection for a long period.

You can send each record as a multipart request, where the parts are the record's body and its media attachments.

Also, you have no need to check for an internet connection or use a receiver to catch changes in the connection state. You can simply use one of these libraries for triggering sync requests:

  1. JobScheduler
  2. Firebase JobDispatcher
  3. Evernote android-job

Answers 2

I would suggest using the Firebase database API. It has nice offline/online/sync implementations.

https://firebase.google.com/docs/database/

And it is possible to read/write the data using the Admin SDK from your NodeJS server:

https://firebase.google.com/docs/admin/setup

Answers 3

Save your records in a local DB and use an ORM for it. Use Retrofit, which provides onSuccess and onFailure callbacks for web service calls. To send data to the server at regular intervals, you can use a sync adapter.

Answers 4

  • First, I need to know: how did you save the images in the local DB?
  • You need to create a service to catch the connection status. Each time the connection is established, you submit your records as multipart requests. You can use Retrofit/AsyncTask.
  • Submit just 1 record per Retrofit/AsyncTask; that makes it easy to handle the success/failure of each record.
  • You can run a single or multiple Retrofit/AsyncTasks to submit one or more records; it's up to you.
  • If your data has images, then on the server side you have to handle the transfer from your server to the 3rd-party server (the server that stores the images).

Answers 5

This is a very broad question and it relates to Architecture, UI Experience, limitations, etc.

It seems to be a synchronization pattern where the user can interact with the data locally and offline but at some point, you'd need to synchronize the local data with server-side and vice-versa.

I believe the best place to start is with a background service (Android, not sure if there's a similar approach on iOS). Essentially, regardless of whether the Android app is running or not, the service must handle all the synchronization, interruption, and failure in the background.

If it's a local DB, then you'd need to manage opening and closing the database appropriately, and I'd suggest using a field to mark any synced records so that if some records fail, you can retry them at another point. Also, you can convert the records to a JSON array and then do a POST request. As for uploading images, they definitely need to be sent in batches if there are a lot of them, while also keeping track of which ones are uploaded and which ones aren't.
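The "mark synced records and retry" idea can be sketched in a few lines of plain Java. Record, SyncPass, and the upload callback are hypothetical names; a real app would persist the flag in the local DB rather than in memory:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical local record with a per-record sync flag.
class Record {
    final int id;
    boolean synced = false;
    Record(int id) { this.id = id; }
}

public class SyncPass {
    // Try to upload every unsynced record; "upload" is any callback that
    // returns true on success. Failed records stay flagged false so a later
    // pass (app launch, connectivity change) can retry them.
    static int sync(List<Record> records, Predicate<Record> upload) {
        int pending = 0;
        for (Record r : records) {
            if (r.synced) continue;            // already on the server
            if (upload.test(r)) r.synced = true;
            else pending++;                    // retry on the next pass
        }
        return pending;
    }

    public static void main(String[] args) {
        List<Record> db = Arrays.asList(new Record(1), new Record(2), new Record(3));
        // Simulated flaky network: record 2 fails on the first pass.
        System.out.println(sync(db, r -> r.id != 2)); // 1 record left to retry
        System.out.println(sync(db, r -> true));      // 0 after connectivity returns
    }
}
```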

The one problem that you will run into if you're supporting synchronization from different devices and platforms, is you'll have conflicting data being synchronized against the backend. You'll need to handle this case otherwise, it could be very messy and most likely cause a lot of weird issues.

Hope this helps on a high level :)

Answers 6

To take a simple approach, add a sync flag to your data object (NSManagedObject) classes. While creating a new object or modifying an existing one, set the sync flag to false.

Filter data objects with sync value as false.

let unsyncedFilter = NSPredicate(format: "sync == %@", NSNumber(value: false))

Now you will have an array of objects that you want to sync with the server, which you can send one by one in requests. On success, change the sync flag to true; otherwise, whenever your function is executed again on app launch or on a reachability status update, it will filter out the unsynced data again and restart the sync.

Answers 7

You can use a divide-and-conquer approach: divide the task into small tasks and upload the data to the server in chunks.

  1. Take a boolean flag "isFinishData", starting with false.
  2. Start uploading data to the server, records 0 to 100.
  3. Next, send records 100 to 200.
  4. Run this process until the last record (1000) has been sent.
  5. On the last batch, set the boolean variable to true and exit the loop.

This logic would work fine on both iOS and Android.
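The batching steps above can be sketched as a runnable loop. BatchUpload and the batch size of 100 mirror the answer's description; the actual POST request is left as a comment:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchUpload {
    static final int BATCH_SIZE = 100;

    // Splits the records into consecutive batches of at most BATCH_SIZE each.
    static List<List<Integer>> batches(List<Integer> records) {
        List<List<Integer>> out = new ArrayList<>();
        for (int start = 0; start < records.size(); start += BATCH_SIZE) {
            int end = Math.min(start + BATCH_SIZE, records.size());
            out.add(records.subList(start, end));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<>();
        for (int i = 0; i < 1000; i++) records.add(i);

        boolean isFinishData = false;
        int sent = 0;
        for (List<Integer> batch : batches(records)) {
            // here you would POST the batch and only advance on success,
            // so an interrupted upload can resume from the failed batch
            sent += batch.size();
            if (sent == records.size()) isFinishData = true;
        }
        System.out.println(sent);         // 1000
        System.out.println(isFinishData); // true
    }
}
```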


Thursday, April 6, 2017

(S)CSS architecture: alternative backgrounds styling


I'm using the 'component' approach to CSS as in SMACSS / ITCSS, but I'm still scratching my head about styling sections with an alternative (dark) background.

e.g. Stripe has regular (dark text on white) and alternative (white text on dark) sections.


As I understand it, there are the following options, assuming this HTML:

<section class="dark">
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>

Style in the context of the section, e.g.:

.dark h2,
.dark p,
.dark .btn {
  color: white;
}

But a) context styling is not recommended; b) where does one put the styles? (Harry Roberts argues for keeping them in the component's file.)

Create alternative-colored components with modifiers

And change the HTML, e.g.:

.title--alt-color { color: white; }
.text--alt-color { color: white; }
...

But a) it doesn't work when you don't know which components will go in there; b) it's more work managing the HTML.

Maybe there is a better way to handle this?

3 Answers

Answers 1

What you're asking for is essentially to style a component within a section based on the section itself. Unfortunately this is impossible with CSS, as there is no parent selector in CSS. However, there is the inherit value, which allows you to style a component based on the rules defined by its parent - perfect for component-driven CSS.

In my opinion, the best way you can go about alternating background styling is to make use of the :nth-of-type pseudo-class on <section>:

section:nth-of-type(2n) {
  background: #464646;
  color: #fff;
}

<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>
<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>
<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>
<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>

Considering :nth-of-type makes use of math to target elements, you can access literally any combination of elements you would like:

// Style every second element, starting with the first element (1, 3, 5, etc.)
section:nth-of-type(2n - 1)

// Style every third element, starting with the second element (2, 5, 8, etc.)
section:nth-of-type(3n + 2)

This way, it won't matter whether you're using a component-driven approach or not, as you'll be able to alternate the styling directly off of <section> itself.

Elements that take a property from the browser's default stylesheet (such as the <a> tag colour) will unfortunately still be styled by that stylesheet, rather than by rules defined on their parent.

You can get around this by either using context-styling:

section:nth-of-type(n) {
  background: #464646;
  color: #fff;
}

section:nth-of-type(n) a {
  color: #fff;
}

<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>

Or alternatively (and preferably) making use of the inherit value to tell every <a> tag to inherit its color from its parent:

section {
  background: #464646;
  color: #fff;
}

a {
  color: inherit;
}

<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>

Hope this helps!

Answers 2

In a component-based approach, the ideal way to do this is to have a mapping ready between background and foreground colours in your style guide. It should be a one-to-one mapping that applies to the majority of your elements. Have CSS classes defined for the same.

Next, have a wrapper container for all your components. Its purpose is to impart text colours to its wrapped components. So the approach is to have a background-colour class for the section and a foreground-colour class that is applied to the wrapper but cascades the style through all of its contents.

Note: Specific colour overrides can always reside inside your components file for instance using a highlight on some text etc.

The library suggested in the comments does the exact same thing. There is a primary and a secondary colour in the theme object. The primary is applied to the section, and the secondary is passed on to the individual components as context. I suggest passing it only to the components' wrapper.

A somewhat clever way to have classes defined is like

t-wrapper-[colorName] 

Now this can be generic, and colorName can come in as context to your wrapper based on the background colour.

Hope this helps. Let me know if this answers what you need or if you would like supporting snippets for the same.

Answers 3

You can set alternating background styling using nth-child(odd) and nth-child(even) pseudo-classes on <section>:

body {
  margin: 0;
}
section {
  padding: 20px;
}
section h2 {
  margin: 0;
}
section:nth-child(odd) {
  background: #f5f7f6;
  color: #333;
}
section:nth-child(even) {
  background: #113343;
  color: #fff;
}

<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>
<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>
<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>
<section>
    <h2>Title</h2>
    <p>Text</p>
    <a href="#" class="btn">Action</a>
</section>


Thursday, April 21, 2016

How to implement nested protocols with boost::asio?


I'm trying to write a server that handles protocol A over protocol B.

Protocol A is HTTP or RTSP, and protocol B is a simple sequence of binary packets:

[packet length][...encrypted packet data...] 

So I want to use things like that:

boost::asio::async_read_until(socket, inputBuffer, "\r\n\r\n", read_handler); 

However, instead of a socket, I'd use some pseudo-socket connected to the Protocol B handlers.

I have some ideas:

  1. Forget about async_read, async_read_until, etc., and write two state machines for A and B.

  2. Hybrid approach: async_read_* for protocol B, state machine for A.

  3. Make internal proxy server.

I don't like (1) and (2) because

  • It's hard to decouple A from B (I want to be able to disable protocol B).

  • Ugly.

(3) just looks ugly :-)

So the question is: how do I implement this?

2 Answers

Answers 1

I won't go over boost::asio, since this seems more a design-pattern question than a networking one. I'd use the State pattern. This way you could change the protocol on the fly.

class net_protocol {
protected:
    socket sock;

public:
    net_protocol(socket _sock) : sock(_sock) {}

    virtual net_protocol* read(Result& r) = 0;
};

class http_protocol : public net_protocol {
public:
    http_protocol(socket _sock) : net_protocol(_sock) {}

    net_protocol* read(Result& r) {
        boost::asio::async_read_until(sock, inputBuffer, "\r\n\r\n", read_handler);
        // set result, or have read_handler set it
        return this;
    }
};

class binary_protocol : public net_protocol {
public:
    binary_protocol(socket _sock) : net_protocol(_sock) {}

    net_protocol* read(Result& r) {
        // read 4 bytes as int size and then size bytes into a buffer, using boost::asio::async_read
        // set result, or have read_handler set it

        // change strategy example
        //if (change_strategy)
        //    return new http_protocol(sock);

        return this;
    }
};

You'd initialize the starting protocol with

std::unique_ptr<net_protocol> proto(new http_protocol(sock)); 

then you'd read with:

// Result result;
net_protocol* next = proto->read(result);
if (next != proto.get())
    proto.reset(next); // only reset when the protocol actually changed;
                       // resetting to the same pointer would delete the live object

EDIT: the if() { return new ...; } branches that return a new strategy are, in fact, a state machine.

If you are concerned about those async reads and thus can't decide which policy to return, have the policy classes call a notify method from their read_handler:

class caller {
    std::unique_ptr<net_protocol> protocol;
    boost::mutex io_mutex;

public:
    void notify_new_strategy(net_protocol* p) {
        boost::unique_lock<boost::mutex> scoped_lock(io_mutex);
        protocol.reset(p);
    }

    void notify_new_result(const Result r) { ... }
};

If you don't need to change the protocol on the fly, you have no need of State; read() would then return a Result (or void, calling caller::notify_new_result(const Result) if async). You could still use the same approach (2 concrete classes and an abstract one), and it would probably be something very close to the Strategy pattern.

Answers 2

I have done something like your answer (2) in the past - using async_read calls to read the header first and then another async_read to read the length, forwarding the remaining data to a hand-written state machine. But I wouldn't necessarily recommend it to you - you might thereby get zero-copy IO for protocol B, but doing an IO call to read the 4-8 byte header is quite wasteful when you know there is always data coming behind it. And the problem is that your network abstractions for the 2 layers will be different - so the decoupling problem that you mention really exists.

Using a fixed-length buffer, only calling async_read, and then processing the data with 2 nested state machines (like you are basically proposing in answer (1)) works quite well. Each state machine would simply get pushed some newly received data (either directly from the socket or from the lower state machine) and process it. This means A would not be coupled to B here, as you could push the data to the A state machine directly from asio if the input/output data format matches.

Similar to this are the patterns that are used in the Netty and Facebook Wangle libraries, where you have handlers that get data pushed from a lower handler in the pipeline, perform their actions based on that input and output their decoded data to the next handler. These handlers can be state machines, but depending on the complexity of the protocol don't necessarily have to be. You can take some inspiration from that, e.g. look at some Wangle docs: https://github.com/facebook/wangle/blob/master/tutorial.md

If you don't want to push your data from one protocol handler to another but rather actively read it (most likely in an asynchronous fashion), you could also design some interfaces yourself (like a ByteReader, which implements an async_read(...) method, or a PacketReader, which allows reading complete messages instead of bytes), implement them in your code (ByteReader also through asio), and use them at the higher level. Thereby you move from a push approach to data processing to a pull approach, which has its own advantages and disadvantages.
