Background:
I am building an application and the proposed architecture is Event/Message Driven on a microservice architecture.
In the monolithic way of doing things, a User/HTTP request triggers some commands that have a direct synchronous response. Responding to that same User/HTTP request is therefore 'hassle free'.
The problem:
The user sends an HTTP request to the UI Service (there are multiple UI Services), which fires some events to a queue (Kafka/RabbitMQ/any). N services pick up that Event/Message, do some magic along the way, and then at some point that same UI Service should pick up a response and give it back to the user that originated the HTTP request. Request processing is ASYNC, but the User/HTTP REQUEST->RESPONSE is SYNC, as per your typical HTTP interaction.
Question: How do I send a response to the same UI Service that originated the action (the service that's interacting with the user over HTTP) in this agnostic/event-driven world?
My research so far: I've been looking around, and it seems that some people are solving this problem using WebSockets.
But the added layer of complexity is that there needs to be some table that maps (RequestId -> WebSocket(Client-Server)), which is used to 'discover' which node in the gateway holds the WebSocket connection for some particular response. But even though I understand the problem and the complexity, I'm stuck because I can't find any articles that explain how to solve this problem at the implementation layer. And this still isn't a viable option because of 3rd-party integrations such as payment providers (WorldPay) that expect REQUEST->RESPONSE, especially on the 3DS validation.
So I am somewhat reluctant to think that WebSockets are an option. Even if WebSockets are OK for web-facing apps, for an API that connects to external systems it is not a great architecture.
Update:
Even if long polling is a possible solution for a web-service API, with a 202 Accepted, a Location header and a Retry-After header, it wouldn't be performant for a high-concurrency, high-availability website. Imagine a huge number of people trying to get the transaction status update on EVERY request they make, while you have to invalidate the CDN cache (go and play with that problem now! ha).
But most important, and most relevant to my case: I have 3rd-party APIs such as payment systems where the 3DS systems have automatic redirects that are handled by the payment provider's system and expect a typical REQUEST/RESPONSE flow. Thus neither this model nor the sockets model would work for me.
Because of this use case, the HTTP REQUEST/RESPONSE should be handled in the typical fashion, where I have a dumb client that expects the complexity of the processing to be handled in the back-end.
So I am looking for a solution where externally I have a typical Request->Response (SYNC) and the complexity of the status (the ASYNChrony of the system) is handled internally.
Here is an example of the long polling, though this model wouldn't work for a 3rd-party API such as a payment provider on 3DS redirects that are not within my control.
POST /user
Payload {userdata}

RETURNS:
HTTP/1.1 202 Accepted
Content-Type: application/json; charset=utf-8
Date: Mon, 27 Nov 2018 17:25:55 GMT
Location: https://mydomain/user/transaction/status/:transaction_id
Retry-After: 10

GET https://mydomain/user/transaction/status/:transaction_id
6 Answers
Answer 1
From a more general perspective: on receiving the request, you can register a subscriber on the queue in the current request's context (i.e. while the request object is in scope) which receives an acknowledgment from the responsible services as they finish their jobs (like a state machine which tracks the progress of the total number of operations). When the terminating state is reached, it returns the response and removes the listener. This should work in any pub/sub-style message queue. Here is an overly simplified demo of what I am suggesting.
let Q = {
  pub: (event, data) => {},
  sub: (event, handler) => {}
}

let controller = async (req, res) => {
  Q.pub("register-new-user", {
    username: req.body.username,
    password: req.body.password,
    tags: req.body.promoCode
  })
  let p1 = new Promise((resolve, reject) => {
    Q.sub("user-added", ack => { resolve(ack) })
  })
  let p2 = new Promise((resolve, reject) => {
    Q.sub("promo-applied", ack => { resolve(ack) })
  })
  let sagaComplete = await Promise.all([p1, p2])
  res.json({ success: true })
}
As you can probably see, this looks like a general pattern which can be abstracted away into a framework; this is essentially what the saga pattern is about. However, it is recommended that you push messages to the client over a socket connection instead of making the client wait for the saga to complete. This pattern can be expanded to enable rollbacks in case of partial failures or timeouts, and it makes your system more manageable.
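One gap in the demo above is that a lost acknowledgment would hang the HTTP request forever. A minimal sketch of how the wait could be raced against a timeout, assuming the same pub/sub stub `Q` as the demo (all names here are illustrative):

```javascript
// Race each saga acknowledgment against a timeout so a lost ack
// cannot hang the HTTP request forever. Q is assumed to be a
// pub/sub stub like the one in the demo above.
function waitForEvent(Q, event, ms) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error('timed out waiting for ' + event)), ms);
    Q.sub(event, ack => {
      clearTimeout(timer);
      resolve(ack);
    });
  });
}

// Resolves with all acks in order, or rejects if any event is late.
function runSaga(Q, events, ms) {
  return Promise.all(events.map(e => waitForEvent(Q, e, ms)));
}
```

A controller would `await runSaga(Q, ["user-added", "promo-applied"], 5000)` and map a rejection to a 504 or to a compensating action, depending on the saga's rollback rules.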
Answer 2
Unfortunately, I believe you'll likely have to use either long polling or web-sockets to accomplish something like this. You need to "push" something to the user, or keep the http request open until something comes back.
For handling getting the data back to the actual user, you could use something like socket.io. When a user connects, socket.io creates an id. Anytime a user connects, you map the userid to the id socket.io gives you. Once each request has a userid attached to it, you can emit the result back to the correct client. The flow would be something like this:
1. The web client requests an order (POST with data and userId)
2. The UI service places the order on the queue (this order should carry the userId)
3. X number of services work on the order (passing the userId along each time)
4. The UI service consumes from the topic. At some point, data appears on the topic; the data it consumes has the userId, and the UI service looks up the map to figure out which socket to emit to.
Whatever code is running on your UI would need to also be event-driven, so it would deal with a push of data without the context of the original request. You could use something like redux for this. Essentially, you'd have the server creating redux actions on the client, it works pretty well!
Hope this helps.
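The userId-to-socket mapping this answer describes can be sketched as a small registry. In real code the values would be socket.io sockets; here a socket is assumed to be anything with an `emit(event, data)` method, so the routing logic stands on its own. All names are illustrative.

```javascript
// Sketch of the userId -> socket map used to route results back
// to the originating client. A "socket" here is any object with
// an emit(event, data) method (e.g. a socket.io socket).
const socketsByUser = new Map();

function onConnect(userId, socket) {
  socketsByUser.set(userId, socket);
}

function onDisconnect(userId) {
  socketsByUser.delete(userId);
}

// Called when the UI service consumes a result from the topic:
// find the originating user's socket and push the data to it.
function pushResult(userId, data) {
  const socket = socketsByUser.get(userId);
  if (!socket) return false; // user went away; drop or persist the result
  socket.emit('order-result', data);
  return true;
}
```

With multiple gateway nodes this map has to be shared (or the emit relayed via a socket.io adapter such as the Redis one), which is the "discovery" table the question mentions.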
Answer 3
What about using Promises? Socket.io could also be a solution if you want realtime.
Have a look at CQRS also. This architectural pattern fits the event driven model and microservice architecture.
Even better. Have a read of this.
Answer 4
Below is a very bare-bones example of how you could implement the UI Service so it works with a normal HTTP Request/Response flow. It uses the Node.js events.EventEmitter class to "route" the responses to the right HTTP handler.
Outline of the implementation:

1. Connect a producer/consumer client to Kafka
   - The producer is used to send the request data to the internal micro-services
   - The consumer is used to listen for data from the micro-services signalling that the request has been processed; I assume those Kafka messages also contain the data that should be returned to the HTTP client.
2. Create a global event dispatcher from the EventEmitter class
3. Register an HTTP request handler that
   - creates a UUID for the request and includes it in the payload pushed to Kafka
   - registers an event listener with our event dispatcher, using the UUID as the event name it listens for
4. Start consuming the Kafka topic, retrieve the UUID that the HTTP request handler is waiting for, and emit an event for it. In the example code I am not including any payload in the emitted event, but you would typically want to include some data from the Kafka message as an argument so the HTTP handler can return it to the HTTP client.
Note that I tried to keep the code as small as possible, leaving out error and timeout handling etc.!
Also note that kafkaProduceTopic and kafkaConsumeTopic are the same topic to simplify testing; there's no need for another service/function to produce to the UI Service's consume topic.
The code assumes the kafka-node and uuid packages have been npm-installed and that Kafka is accessible on localhost:9092.
const http = require('http');
const EventEmitter = require('events');
const kafka = require('kafka-node');
const uuidv4 = require('uuid/v4');

const kafkaProduceTopic = "req-res-topic";
const kafkaConsumeTopic = "req-res-topic";

class ResponseEventEmitter extends EventEmitter {}

const responseEventEmitter = new ResponseEventEmitter();

var HighLevelProducer = kafka.HighLevelProducer,
    client = new kafka.Client(),
    producer = new HighLevelProducer(client);

var HighLevelConsumer = kafka.HighLevelConsumer,
    client = new kafka.Client(),
    consumer = new HighLevelConsumer(
        client,
        [
            { topic: kafkaConsumeTopic }
        ],
        {
            groupId: 'my-group'
        }
    );

var s = http.createServer(function (req, res) {
    // Generate a random UUID to be used as the request id that
    // is used to correlate request/response messages.
    // The internal micro-services need to include this id in
    // the "final" message that is pushed to Kafka and consumed
    // by the UI service
    var id = uuidv4();

    // Send the request data to the internal back-end through Kafka.
    // In real code the Kafka message would be a JSON/protobuf/...
    // message, but it needs to include the UUID generated by this
    // function
    var payloads = [
        { topic: kafkaProduceTopic, messages: id },
    ];
    producer.send(payloads, function (err, data) {
        if (err != null) {
            console.log("Error: ", err);
            return;
        }
    });

    responseEventEmitter.once(id, () => {
        console.log("Got the response event for ", id);
        res.write("Order " + id + " has been processed\n");
        res.end();
    })
});

s.timeout = 10000;
s.listen(8080);

// Listen to the Kafka topic that streams messages
// indicating that the request has been processed and
// emit an event to the request handler so it can finish.
// In this example the consumed Kafka message is simply
// the UUID of the request that has been processed (which
// is also the event name that the response handler is
// listening to).
//
// In real code the Kafka message would be a JSON/protobuf/...
// message which needs to contain the UUID the request handler
// generated. This Kafka consumer would then have to deserialize
// the incoming message and get the UUID from it.
consumer.on('message', function (message) {
    responseEventEmitter.emit(message.value);
});
Answer 5
As I expected, people try to fit everything into a concept even if it does not fit there. This is not criticism; it is an observation from my experience and from reading your question and the other answers.
Yes, you are right that microservices architecture is based on asynchronous messaging patterns. However, when we talk about UI, there are 2 possible cases in my mind:
UI needs a response immediately (e.g. read operations or those commands on which user expects answer right away). These don't have to be asynchronous. Why would you add an overhead of messaging and asynchrony if the response is required on the screen right away? Does not make sense. Microservice architecture is supposed to solve problems rather than create new ones by adding an overhead.
UI can be restructured to tolerate delayed responses (e.g. instead of waiting for the result, the UI can just submit the command, receive an acknowledgement, and let the user do something else while the response is being prepared). In this case, you can introduce asynchrony. The gateway service (with which the UI interacts directly) can orchestrate the asynchronous processing (wait for completion events and so on), and when ready, it can communicate back to the UI. I have seen UIs using SignalR in such cases, where the gateway service was an API that accepted socket connections. If the browser does not support sockets, it should ideally fall back to polling. Anyway, the important point is that this can only work under one condition: the UI can tolerate delayed answers.
If Microservices are indeed relevant in your situation (case 2), then structure UI flow accordingly, and there should not be a challenge in microservices on the back-end. In that case, your question comes down to applying event-driven architecture to the set of services (edge being the gateway microservice which connects the event-driven and UI interactions). This problem (event driven services) is solvable and you know that. You just need to decide if you can rethink how your UI works.
Answer 6
Good question. My answer to this is: introduce synchronous flows in the system.
I am using RabbitMQ, so I don't know about Kafka, but you should look into Kafka's synchronous (request/reply) flow.
WebSockets do seem like overkill.
Hope that helps.
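The "synchronous flow" this answer alludes to is RabbitMQ's RPC pattern: the client publishes a request with a correlationId and a replyTo queue, and a reply consumer matches responses back to waiting callers. A minimal sketch of just that correlation bookkeeping, with no broker involved (the names mirror amqplib conventions but everything here is illustrative):

```javascript
// Sketch of the correlation-id matching at the heart of the
// RabbitMQ RPC pattern. In real code sendRpc would also publish
// the request with { correlationId, replyTo } set, and onReply
// would be the consumer callback on the reply queue.
const pending = new Map();

// Client side: remember who is waiting for this correlationId.
function sendRpc(correlationId) {
  return new Promise(resolve => pending.set(correlationId, resolve));
}

// Reply-queue consumer: hand the reply to the waiting caller.
function onReply(correlationId, payload) {
  const resolve = pending.get(correlationId);
  if (resolve) {
    pending.delete(correlationId);
    resolve(payload);
  }
}
```

This gives the external caller the plain REQUEST->RESPONSE shape the question needs, while everything behind the gateway stays message-driven.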