python - What can cause a seemingly infinite loop in my parallelized code?
Here is what my code looks like:

    from multiprocessing import Process, Queue

    def data_processing_function(some_data):
        # do some things to some_data
        queue.put(some_data)

    processes = []
    queue = Queue()

    for data in bigdata:
        if meets_criteria(data):  # placeholder check
            prepared_data = prepare_data(data)
            processes += [Process(target=data_processing_function, args=(prepared_data,))]
            processes[-1].start()

    for process in processes:
        process.join()

    results = []
    for _ in range(queue.qsize()):
        results += [queue.get()]
When I tried it on a reduced dataset, everything went smoothly. But when I launched it on the full dataset, it looks like the script entered an infinite loop during the process.join() part. In a desperate move, I killed all the processes except the main one, and execution went on, but then it hangs on queue.get() without any notable CPU or RAM activity.

What can cause this? Is it the way the code is designed?
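(For context: the multiprocessing documentation notes that joining a process which has put items on a Queue before the queue is drained can deadlock, since the child's feeder thread blocks once the underlying pipe buffer fills up. Below is a minimal sketch, not the actual code, of draining the queue before joining; bigdata, the worker body, and the item counts are placeholders.)

    from multiprocessing import Process, Queue

    def data_processing_function(some_data, queue):
        # placeholder for the real processing
        queue.put(some_data)

    if __name__ == "__main__":
        queue = Queue()
        processes = []
        bigdata = range(8)  # placeholder dataset

        for data in bigdata:
            processes.append(Process(target=data_processing_function, args=(data, queue)))
            processes[-1].start()

        # Drain the queue *before* joining: each worker puts exactly one item,
        # so one get() per process is expected. Joining first can block forever
        # once the queue's pipe buffer is full.
        results = [queue.get() for _ in processes]

        for process in processes:
            process.join()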