>The reasoning, very simply, is we support the more powerful boards that are fitted to the bigger clusters, and not the (possibly more common) boards in the gaming PCs. Small changes in chip/board architecture may require a significant code rewrite, and where the RAM is slightly lower the gpu may not be worth using. We no longer support certain operating systems for much the same reason.

1) GeForce xx70, xx80, xx90, and Titan boards are more powerful than any Quadro or Tesla. You don't need a doctoral degree to check the specs. Quadro's unique feature is extensive multi-monitor support, which is not used by HFSS & Maxwell. 10 years ago Quadro provided more (slower) RAM, but today the said GeForces have enough onboard RAM.
In other words, the said GeForce boards are the exact equivalent of Quadro boards and can be converted into each other simply by changing the device-id, without any code rewrite.

2) Bigger clusters require more of the cheaper GeForce boards. Any company tries to reduce costs, and purchasing unnecessarily expensive CGI features is strongly undesired.

3) "Small changes in chip/board architecture may require a significant code rewrite" - this is simply NOT TRUE (in big bold letters). Nvidia, like Intel, does not provide any microcode documentation; it is a highly protected company secret. Developers use standard APIs. The CUDA API became popular precisely because it does not require significant code rewrites. Period.

>We only test so many boards and chips, so the gaming boards may work but we just haven't tried it.

We did not ask for any testing or support. Just why do you forcefully block the use of GeForce in the software? You did not do it 5-7 years ago, when the GPU acceleration trend started.
- Is it your decision to lock the hardware by device-id?
- Is it a request from Nvidia?

>and where the RAM is slightly lower the gpu may not be worth using

Why don't you leave that to the customer?
GeForce RTX 3080: 10 GB RAM, $1100.
Quadro RTX 4000: 8 GB, $1100.
Tell me, which is more powerful and which has more RAM onboard.

In short:
When those boards were rolled out, we could have bought twice as many GeForce boards as Quadros, and we would have used the remaining funds for HFSS licenses. But we did not.

mahesh2444
Since Rob tells me it is just words, I will say it here:
HFSS 2020 uses the GPU only for models with 100% isotropic dielectrics. Whenever you have any anisotropy, it disables the GPU code. When I simulate ferrites in DrivenModal, I have to move to a single-CPU 8-core workstation, because single-CPU high-clock machines are simply faster in HFSS. A cluster offers a fast sweep, but again, it is better to run one task with 4-6 cores per workstation than to combine all the tasks on one machine.
When you have all pure dielectrics, you will get a crazy benefit from the GPU for a model of any size. Just be sure to use a local license, or tune your licensing server/network well, because you will feel the penalty of license negotiation with the license server.
Eigenmode still does not support the GPU, to my disappointment.
Transient seems to support the GPU, but I do not use it enough to check in detail. Transient still does not support any anisotropy, even in the CPU code: try defining a matrix (tensor) property and it will throw an error.
I have not tried the GPU with Maxwell yet; I have no license at my workstation, and where we do have a license there is no serious GPU to try. Anyway, Maxwell does not take nearly as much time as HFSS.