I've seen situations like this before, and it is a bit cryptic, especially without all the details. I think the issue is that Python is not handling the memory very efficiently. The gradient calculation requires the shape derivative of the device as you vary the parameters, and that appears to be the point of failure. The CPU is used during the solve steps, but the gradient calculation (i.e., the dots in the terminal) is a significant part of the workflow and is memory intensive because you are dealing with spatial index data. If the simulation itself is well within the 32 GB limit of your machine, it shouldn't in theory be causing issues (unless you are using d_eps_num_threads?); however, I suspect the datasets are not being cleared from the scope of the workspace as soon as they are no longer needed. More RAM would probably help, not necessarily by solving the underlying issue, but by keeping you from bumping up against the limit so often. It may be worth trying 45 parameters to see whether the issue persists, or trying a machine with more RAM.
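As a side note, if any part of the workflow is under your control in Python, you can sometimes work around datasets lingering in scope by deleting large intermediates explicitly and forcing a garbage collection pass between iterations. This is only a generic sketch, not the optimizer's actual internals; the variable names are hypothetical stand-ins for whatever per-iteration data (fields, index maps) your workflow holds onto, and tracemalloc is just used here to confirm the memory is actually released.

```python
# Sketch: releasing large per-iteration data explicitly, then verifying
# with tracemalloc that the memory was actually given back.
import gc
import tracemalloc

tracemalloc.start()

def run_iteration():
    # Stand-in for a memory-heavy gradient step (e.g. spatial index data).
    fields = [float(i) for i in range(1_000_000)]
    gradient = sum(fields) / len(fields)
    # Drop the big intermediate as soon as the result is extracted,
    # rather than waiting for it to fall out of scope.
    del fields
    gc.collect()
    return gradient

g = run_iteration()
current, peak = tracemalloc.get_traced_memory()
print(f"gradient={g}, current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
```

The point of the sketch is the gap between `current` and `peak`: if the intermediate were kept alive until the end of the run, `current` would stay near `peak` instead of dropping back down.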
Other settings that may be adding to the memory requirements are saving all the simulations and the 3D index data; disabling those options may help with the memory overflow. A very high number of frequency points could also be a factor.
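To get a feel for why those options matter, a back-of-the-envelope estimate is useful: saved 3D field data scales with the grid size, the number of field components, and the number of frequency points. The grid dimensions and frequency count below are hypothetical placeholders, so plug in your own mesh to see how quickly this grows past 32 GB.

```python
# Rough memory estimate for saved 3D field data across frequency points.
nx, ny, nz = 400, 400, 200   # spatial grid points (hypothetical example)
n_freq = 50                  # number of frequency points
n_components = 3             # Ex, Ey, Ez
bytes_per_value = 16         # complex, double precision

total_bytes = nx * ny * nz * n_components * n_freq * bytes_per_value
print(f"{total_bytes / 1e9:.1f} GB")  # prints "76.8 GB" for these numbers
```

Even a modest grid at many frequency points can dwarf the simulation itself, which is why trimming frequency points or disabling the saved data can make the difference.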
Sorry I can't be more help; it is certainly a challenge, but an unavoidable one at this time. It is not always clear beforehand what the computer requirements of an optimization will be, and the only dependable solution I've found is to keep a RAM buffer. Please let me know how it goes.