d08941008
Subscriber
Hi, I still have a problem with sweeps and optimization on the non-GUI Linux cluster.
So far I've tried every possibility I can think of. I still can't use the GUI-supported commands on the cluster, and I can't log in with only the private SSH key via "ssh -i ", even after running ssh-keygen on the local PC, copying the public key over with scp, and appending it to the authorized_keys file on the remote login cluster (i.e. the submission cluster). As a result, I can't run sweeps or optimizations with the GUI-supported job manager on the local PC. I'm fairly sure this pay-for-use cluster doesn't let users ssh in with only a private key, without a password and TOTP. My guess is that this is because the login node and the compute nodes use different passwords, and users never get to know the compute nodes' passwords. So I'm stuck and looking for other possible ways.
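For reference, this is roughly the key setup I tried (user@login is just a placeholder for the real account and host):

# on the local PC
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub user@login:~/
# on the login node
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# back on the local PC -- this still prompts for password + TOTP
ssh -i ~/.ssh/id_rsa user@login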
A parameterized structure can be built from an lsf file, but that still requires calling the GUI *-solutions programs, and there is no documented way to call the engine in the terminal with an lsf file as an optional argument. So if passwordless SSH is not allowed, sweeps and optimization are hard to perform on a non-GUI Linux cluster.
So I'm curious whether it's possible to run a parameterized .fsp file through the "userprop" of the ::model. For example, calling the engine in the terminal as "engine-solver-mpi *.fsp -R 5e-6" would mean the radius of the model is set to 5 um and the model is then solved and managed by engine-solver-mpi. Alternatively, could the parameterized model be run by combining engine-solver-mpi with the lsf file, with the results exported to a txt file through the script command "write"? Either way the GUI-supported job manager could be avoided, and users could apply their own job-scheduler submission script in shell or Python.
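Something like the following (all hypothetical: as far as I know no such -R flag exists today, and "engine-solver-mpi" and the file names are just placeholders):

# hypothetical: pass the ::model user property "R" straight to the engine
engine-solver-mpi model.fsp -R 5e-6

# or a plain shell wrapper over pre-generated, already-parameterized files
for R in 4e-6 5e-6 6e-6; do
    engine-solver-mpi "model_R${R}.fsp" -logall
done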

Or maybe the job manager on the local GUI-supported PC could offer one more login method with an interactive console, letting users enter the password and TOTP themselves. Then all the jobs of each sweep, or of each generation of an optimization, could be submitted within a single job script (avoiding queue waiting time, and working even when the queue status is not always available), like this:
#!/bin/bash
#PBS ...
#PBS ...
#PBS ...
...

module load ...
# one engine call per job file (engine name and file names are placeholders)
engine-solver-mpi job_1.fsp -logall -remote
engine-solver-mpi job_2.fsp -logall -remote
engine-solver-mpi job_3.fsp -logall -remote
engine-solver-mpi job_4.fsp -logall -remote
...
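A single submission would then cover the whole batch, e.g. (the script name is a placeholder):

qsub sweep_gen01.pbs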
But this way, an interactive SSH login for job submission still has to be performed manually for each sweep or each generation.


Or maybe I could do it by hand: start the sweep or optimization, right-click the job (after the job files are created) to pause it, then manually scp and ssh to the cluster through an interactive terminal for data exchange and job submission. After the jobs finish, I'd download the solved files back to the local PC to replace the original job files, hit "quit and don't save" or "force quit" to end the sweep or generation so that the job manager collects the results of all the job files, and then repeat the steps above if there's a next generation. See the sketch below.
Still, this would require a manual SSH login for each sweep or each generation.
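Roughly, one generation by hand would look like this (paths, host, and script names are all placeholders):

# upload the job files created by the paused sweep/generation
scp ~/sweep_jobs/*.fsp user@login:~/jobs/
# interactive login: type the password and TOTP manually
ssh user@login
#   ... on the cluster: qsub the batch script, wait for the jobs to finish ...
# download the solved files back, overwriting the local job files
scp user@login:~/jobs/*.fsp ~/sweep_jobs/
# then force-quit the sweep so the job manager collects the results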



Anyway, any advice would be highly appreciated. Thanks.

Have a nice weekend.