Python script hangs when creating an LSTM on multiple cores



I am trying to create different LSTM models on different data using a multiprocessing Pool.
The program hangs when it tries to create an LSTM layer.

from keras.models import Sequential
from keras.layers import Dense, LSTM

def create_model(neurons, X):
    model = Sequential()
    model.add(LSTM(neurons, input_shape=(X.shape[1], X.shape[2]), return_sequences=False))
    model.add(Dense(1, kernel_initializer='uniform', activation='relu'))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])
    return model

from pathos.multiprocessing import ProcessingPool
from itertools import repeat
pool = ProcessingPool(4)
neurons = [50, 100]
results = pool.map(create_model, neurons, repeat(X))

The script hangs at the LSTM layer creation. The program works if I replace the LSTM with a Dense layer. What is wrong with the code?


Hi @pooja16,

One question: does this input work for the LSTM outside of the pool? Try neurons = 50 and neurons = 100 in two separate, single-process runs. Let me know if that works.


Here the pool creates separate processes: one gets (neurons=50, X) as parameters and the other gets (neurons=100, X), so the inputs are not the problem. The program still hangs at the LSTM layer creation.
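One pattern worth trying, assuming the hang comes from forking a process whose parent has already initialised TensorFlow (a known failure mode on Linux, where the default start method is fork): build each model entirely inside the worker under a "spawn" context, and import Keras inside the worker function so every process gets a fresh runtime. The sketch below uses the stdlib multiprocessing module rather than pathos, and the worker name `train_one` is illustrative, not from the original post; the Keras calls are left as comments so the sketch stays runnable without TensorFlow installed.

```python
from multiprocessing import get_context

def train_one(neurons, X):
    # In real use, import Keras here, inside the worker, so the
    # TensorFlow runtime is created fresh in this process rather
    # than inherited via fork (assumption: that inheritance is
    # what makes LSTM creation hang):
    #     from keras.models import Sequential
    #     from keras.layers import Dense, LSTM
    #     model = Sequential()
    #     model.add(LSTM(neurons, input_shape=(len(X[0]), 1)))
    #     ...
    # Here we just echo the arguments to demonstrate the pattern.
    return (neurons, len(X))

if __name__ == "__main__":
    X = [[1.0, 2.0], [3.0, 4.0]]  # placeholder for the real training data
    # "spawn" starts each worker as a fresh interpreter instead of a
    # fork of the parent, so no TensorFlow state is inherited.
    with get_context("spawn").Pool(2) as pool:
        results = pool.starmap(train_one, [(n, X) for n in (50, 100)])
    print(results)  # → [(50, 2), (100, 2)]
```

starmap is used instead of map + repeat(X) so each worker receives its (neurons, X) pair explicitly; results come back in submission order.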