Friday, April 29, 2016

What's a good customizable web (reverse) proxy in Python with support for WebSockets?

I'm looking to implement a simple reverse proxy in Python in which I can specify some custom logic in a request handler and proxy the request to another URL based on the original request URL, the user's role (from a cookie), and so on. I might also want to inject some HTML into the returned page. It must handle WebSockets.

Something similar to this, but in Python: https://github.com/nodejitsu/node-http-proxy#setup-a-stand-alone-proxy-server-with-custom-server-logic

4 Answers

Answer 1

If you do not need to inspect or modify the WebSocket connections (i.e. you just want them to pass through), mitmproxy should cover your use case pretty well. At the very least, it has a simple script interface you can use to perform your modifications: http://docs.mitmproxy.org/en/latest/scripting/inlinescripts.html
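
For illustration, here's a minimal sketch of such a script, assuming the newer addon-style event hooks (the inline-script API in the docs linked above also takes a context argument); the backend host names and the "role" cookie here are hypothetical:

    # A minimal sketch of a mitmproxy script, assuming the newer addon-style
    # event hooks (the inline-script API linked above also passes a context
    # argument). The backend host names and the "role" cookie are hypothetical.
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Route to a different backend based on the original URL and a cookie.
        role = flow.request.cookies.get("role", "user")
        if flow.request.path.startswith("/admin") and role == "admin":
            flow.request.host = "admin-backend.internal"
        else:
            flow.request.host = "app-backend.internal"

    def response(flow: http.HTTPFlow) -> None:
        # Inject a snippet into HTML responses on the way back.
        if "text/html" in flow.response.headers.get("content-type", ""):
            flow.response.text = flow.response.text.replace(
                "</body>", "<!-- injected --></body>")

You would load it with something like mitmdump --mode reverse:http://app-backend.internal -s proxy_script.py; the exact flags differ between mitmproxy versions.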

You can enable WebSocket pass-through as specified in this answer.

(Obligatory Disclaimer: I'm one of the mitmproxy authors)

Answer 2

I'm not aware of a purpose-built tool out there that does exactly this, but it shouldn't be too hard to do with asyncio. This of course assumes you're using Python 3, but if that's the case then http://aiohttp.readthedocs.org/en/stable/ should let you fake it. I wouldn't use it for large-scale proxying without some serious testing, but for local development it should be fine. It's just going to take a bit of programming to build that particular tool.
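
As a rough illustration, here's a minimal HTTP-only sketch with aiohttp; the UPSTREAM address is hypothetical, and WebSocket forwarding would additionally need web.WebSocketResponse on the server side plus session.ws_connect toward the backend:

    # A minimal sketch of a reverse proxy with aiohttp; the UPSTREAM address is
    # hypothetical, and only plain HTTP is handled (WebSocket forwarding would
    # additionally need web.WebSocketResponse plus session.ws_connect).
    import aiohttp
    from aiohttp import web

    UPSTREAM = "http://localhost:8080"  # hypothetical backend

    HOP_BY_HOP = {"Connection", "Transfer-Encoding", "Content-Encoding",
                  "Content-Length", "Host"}

    async def handle(request):
        # Custom logic goes here: pick a target from request.path, cookies, etc.
        target = UPSTREAM + request.path_qs
        headers = {k: v for k, v in request.headers.items()
                   if k.title() not in HOP_BY_HOP}
        async with aiohttp.ClientSession() as session:
            async with session.request(request.method, target, headers=headers,
                                       data=await request.read(),
                                       allow_redirects=False) as upstream:
                body = await upstream.read()
        # HTML injection could happen here, e.g. body.replace(b"</body>", ...).
        resp_headers = {k: v for k, v in upstream.headers.items()
                        if k.title() not in HOP_BY_HOP}
        return web.Response(status=upstream.status, headers=resp_headers, body=body)

    app = web.Application()
    app.router.add_route("*", "/{tail:.*}", handle)

    if __name__ == "__main__":
        web.run_app(app, port=8000)

Creating a ClientSession per request keeps the sketch short; a real proxy would reuse a single session and stream request/response bodies instead of buffering them.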

Answer 3

Check out proxy.py to see whether it fits your requirements. It doesn't need anything other than the Python standard library.

Answer 4

I've used and love Tornado (http://www.tornadoweb.org/en/stable/) for creating custom web servers and WebSocket servers in Python. That said, if this needs to scale, you're best off putting the Tornado instances behind nginx and letting nginx take care of connection handling. Because Python (and Node.js too, actually) is effectively single-threaded (the infamous GIL), you're best off running about one instance per core (maybe 1.5 or 2 per core, depending on the specific application's I/O and CPU usage patterns; in a perfect world 1 per core, and only benchmarking will tell) and having nginx act as the reverse proxy.
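
As a rough sketch of the custom-logic part, here's a minimal Tornado proxy handler that routes by a hypothetical "role" cookie and forwards plain HTTP GETs to hypothetical backends (a WebSocket path would pair tornado.websocket.WebSocketHandler with tornado.websocket.websocket_connect toward the backend):

    # A minimal sketch of custom proxy logic in Tornado; the backend addresses
    # and the "role" cookie are hypothetical, and only GET over plain HTTP is
    # wired up here.
    import tornado.ioloop
    import tornado.web
    from tornado.httpclient import AsyncHTTPClient, HTTPRequest

    BACKENDS = {"admin": "http://localhost:9001",
                "user": "http://localhost:9002"}  # hypothetical backends

    class ProxyHandler(tornado.web.RequestHandler):
        async def get(self):
            # Route by a role cookie, forwarding the original headers/cookies.
            role = self.get_cookie("role", "user")
            upstream = BACKENDS.get(role, BACKENDS["user"]) + self.request.uri
            client = AsyncHTTPClient()
            resp = await client.fetch(
                HTTPRequest(upstream, headers=self.request.headers),
                raise_error=False)
            self.set_status(resp.code, reason=resp.reason)
            ctype = resp.headers.get("Content-Type")
            if ctype:
                self.set_header("Content-Type", ctype)
            if resp.body:
                self.write(resp.body)  # HTML injection could happen here

    def make_app():
        return tornado.web.Application([(r"/.*", ProxyHandler)])

    if __name__ == "__main__":
        make_app().listen(8000)
        tornado.ioloop.IOLoop.current().start()

Each Tornado process runs a single IOLoop, which is why the one-instance-per-core-behind-nginx layout above is the usual way to scale it.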
