python - What can cause a seemingly infinite loop in my parallelized code?


Here is what my code looks like:

    from multiprocessing import Process, Queue

    def data_processing_function(some_data):
        # ... do things with some_data ...
        queue.put(some_data)

    processes = []
    queue = Queue()

    for data in bigdata:
        if meets_criteria(data):  # placeholder for the filtering condition
            prepared_data = prepare_data(data)
            processes.append(Process(target=data_processing_function,
                                     args=(prepared_data,)))
            processes[-1].start()

    for process in processes:
        process.join()

    results = []
    for _ in range(queue.qsize()):
        results.append(queue.get())

When I tried it on a reduced dataset, everything went smoothly. When I launched it on the full dataset, the script appears to enter an infinite loop during the process.join() part. In a desperate move, I killed every process except the main one, and execution went on; now it hangs on queue.get() without any notable CPU or RAM activity.

What can cause this? Is it a problem with how the code is designed?
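For comparison, the pattern below is a minimal sketch of the same structure that drains the queue before joining. The Python multiprocessing documentation warns that a child process which has put items on a Queue will not terminate until its buffered data has been flushed through the underlying pipe, so calling join() before get() can deadlock once results exceed the pipe buffer. The worker function and data here are illustrative placeholders, not the original code:

    from multiprocessing import Process, Queue

    def worker(q, item):
        # simulate processing, then push a largish result onto the queue
        q.put(item * 1000)

    if __name__ == "__main__":
        q = Queue()
        procs = [Process(target=worker, args=(q, "x")) for _ in range(4)]
        for p in procs:
            p.start()

        # Drain the queue BEFORE joining: one get() per started process,
        # so we do not rely on the unreliable queue.qsize().
        results = [q.get() for _ in procs]

        for p in procs:
            p.join()

Note also that Queue.qsize() is documented as approximate (and raises NotImplementedError on some platforms), so counting one get() per started process is safer than looping over qsize().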

