Python HTTP server, multiple simultaneous requests


I have developed a rather extensive HTTP server written in Python utilizing Tornado. Without setting anything special, the server blocks on requests and can only handle one at a time. The requests access data (MySQL/Redis) and print it out in JSON. These requests can take upwards of a second in the worst case. The problem is that when a request comes in that takes a long time (3s), and an easy request that would take 5ms to handle comes in right after, the second one doesn't start until the first one is done, because the first request is going to take 3s. So the second request takes >3s to be handled.

How can I make this situation better? I need the second, simple request to begin executing regardless of the other requests. I'm new to Python and more experienced with Apache/PHP, where there is no notion of two separate requests blocking each other. I've looked at mod_python to emulate the PHP example, but that seems to block as well. Can I change my Tornado server to get the functionality that I want? Everywhere I read, it says that Tornado is great at handling multiple simultaneous requests.

Here is the demo code I'm working with. I have a sleep command which I'm using to test whether the concurrency works. Is sleep a fair way to test concurrency?

import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.gen
import time

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.engine
    def handlePing1(self):
        time.sleep(4)  # simulating an expensive mysql call
        self.write("response to browser ....")
        self.finish()

    def get(self):
        start = time.time()
        self.handlePing1()
        # response = yield gen.Task(handlePing1)  # I see tutorials around that suggest using this ....
        print "done with request ...", self.request.path, round((time.time() - start), 3)


application = tornado.web.Application([
    (r"/.*", MainHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    port = 8833
    http_server.listen(port)
    print "listening on " + str(port)
    tornado.ioloop.IOLoop.instance().start()

Thanks for your help!

EDIT: Remember that Redis is also single threaded, so even if you have concurrent requests, your bottleneck will be Redis. You won't be able to process more requests because Redis won't be able to process them.

Tornado is a single-threaded, event-loop based server.

from documentation:

By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.

Concurrency in Tornado is achieved through asynchronous callbacks. The idea is to do as little as possible in the main event loop (which is single-threaded) to avoid blocking it, and to defer I/O operations through callbacks.
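To illustrate the idea, here is a minimal sketch using the same old-style asynchronous/gen.engine API as the code in the question. A non-blocking timeout on the IOLoop stands in for an async database call; with a real async driver you would yield on the driver's call instead. The handler name and port are placeholders, not anything from the original post.

import time

import tornado.gen
import tornado.httpserver
import tornado.ioloop
import tornado.web


class NonBlockingPingHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.engine
    def get(self):
        start = time.time()
        # Instead of time.sleep(4), which freezes the whole event loop,
        # yield control back to the IOLoop for 4 seconds. Other requests
        # are served while this one waits.
        yield tornado.gen.Task(
            tornado.ioloop.IOLoop.instance().add_timeout, time.time() + 4)
        self.write("response to browser ....")
        self.finish()
        print "done with request ...", self.request.path, round(time.time() - start, 3)


application = tornado.web.Application([
    (r"/.*", NonBlockingPingHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8833)
    tornado.ioloop.IOLoop.instance().start()

With this version, a fast request that arrives while the slow one is waiting is handled immediately instead of queuing behind it.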

If using asynchronous operations doesn't work for you (e.g. there is no async driver for MySQL or Redis), your only way of handling more concurrent requests is to run multiple processes.

The easiest way is to front your Tornado processes with a reverse-proxy like HAProxy or nginx. The Tornado doc recommends nginx: http://www.tornadoweb.org/en/stable/overview.html#running-tornado-in-production

You would run multiple versions of your app on different ports. Ex:

python app.py --port=8000
python app.py --port=8001
python app.py --port=8002
python app.py --port=8003

A rule of thumb is to run one process for each core on your server.
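As a side note, Tornado can also pre-fork the processes for you: a minimal sketch, assuming a Tornado version where HTTPServer.bind() and start() are available, would bind the socket once and then fork one worker per core, all sharing the same port. The handler and port below are just placeholders.

import tornado.httpserver
import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello from one of the forked processes")


application = tornado.web.Application([
    (r"/.*", MainHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.bind(8833)
    http_server.start(0)  # 0 forks one server process per CPU core
    tornado.ioloop.IOLoop.instance().start()

The reverse-proxy setup with separate ports remains the approach recommended in the linked documentation for production.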

Nginx will take care of balancing each incoming request across the different backends. So if one of the requests is slow (~ 3s), you have n-1 other processes listening for incoming requests. It is possible – and likely – that all processes end up busy processing a slow-ish request, in which case requests are queued and processed when any process becomes free, e.g. finishes processing its current request.
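A minimal nginx configuration for this kind of setup might look like the following sketch; the upstream name is made up and the ports simply match the example commands above.

upstream tornado_backends {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;

    location / {
        proxy_pass http://tornado_backends;
    }
}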

I recommend you start with nginx before trying HAProxy, as the latter is a little bit more advanced and thus a bit more complex to set up (lots of switches to tweak).

Hope this helps. Key take-away: Tornado is great for async I/O, less so for CPU-heavy workloads.

