June 29, 2021 at 3:37 pm - Lumono (Subscriber)
First, please excuse me if this post belongs in another subforum. I was not sure whether to post in the Installation and Licensing forum or in the Fluid forum.
I am having an issue running Fluent in parallel using the DEM collision model. The partitioning/load balancing does not seem to be set up correctly, so all DPM iterations appear to be computed by node0 (at least that is my conclusion from checking the memory usage).
I searched the forum on this topic but unfortunately could not find a solution.
I am quite new to Fluent and to parallel computing and have very limited knowledge of potential errors, so I did my best to provide all the information necessary. If something is missing, please let me know.
First my setup:
Setup->General->Steady (Pressure based)
Models->Discrete Phase (On)

June 30, 2021 at 11:01 am - Rob (Ansys Employee)
The hybrid scheme will help balance the load, but also check where the particles are. It's not uncommon for the particles to all be on one core to start with; it's only later on that you'll see some load being passed around. How high is the particle loading, to need DEM collisions?
July 1, 2021 at 11:39 am - Lumono (Subscriber)
Thank you for getting back to me!
Maybe it is useful to show my setup for better understanding.
I have a rotating disk, from which I inject the particles. The area where I want to inject is quite small, so your assumption that the particles are all on one core to start with might be correct in my case.
I am continuously injecting particles for the entire simulation duration.
The repeat interval ranges from 7e-6 s for the top injection to 1.5e-3 s for the bottom injection. I do get some collisions before the collision wall due to drag, which should disappear once I reach a roughly "steady" solution after enough iterations.
The problem is that I do not have a predominant particle direction, because it changes from X to Y after the collision wall. For that reason I left the load-balance direction at "Metis".
If I plot Contours of Active Cell Partition it looks like this:
The particles should collide with the collision wall (45° angle) and then leave the domain at the top.
I want to investigate the case where particles that have already been reflected by the collision wall collide with particles from a different injection point that have not reached the wall yet. Here you can see an example (note: the case I want to investigate has more injection points, see screenshot above). Red shows collided particle tracks, blue just reflected ones.
I do not understand your question "How high is the particle loading to need DEM collisions?"
If I use the repeat intervals stated above and the injection point distribution, I have for example 1000 tracked particles after 0.004 s of simulation time, and would have about 6000 tracked particles after the desired simulation time of 0.03 s.
Could you clarify what you mean by that?
July 1, 2021 at 2:43 pm - Rob (Ansys Employee)
Looking at that, your particles are passing 2-3 partitions on the way to the wall and another two on the way out.
In your original note you just wanted DEM; you didn't specify that the particles would collide with a build-up caused by bouncing off the wall. You'll need to run on to see the collisions, and watch convergence, as the DEM model tends to require a small time step.
July 1, 2021 at 4:17 pm - Lumono (Subscriber)
Thank you once again for your help!
If I understand it right, the particles should be calculated by 5-6 CPUs then? If so, the problem that slows down my simulation has to be somewhere else...
I do understand that I need a small particle time step, about 1/20 of the calculated collision time (due to the max. allowed overlap of 10%). I also agree that I need to run the simulation longer to get those collisions. The problem is the endless simulation time (with my approach), which has prevented me from reaching converged behaviour.
If the number of CPUs computing the DPM iterations is not responsible for the slowdown, my guess would be that the collisions occurring even before the particles reach the collision wall are the reason it gets so slow. You can see them in red for the top injection point.
The collisions there occur because the drag influence on the first particles is higher than on the later ones -> if I simulate long enough to reach some sort of "steady" state, this should not occur anymore either, I guess.
Thanks to your input, I got an idea and tried the following:
1.) Only simulate the fluid until it is converged (100 Fluid iterations).
2.) Turn OFF DEM Collision Model
3.) Particle Time Step Size: 1e-6 s, No. of Particle Time Steps: 10 000, DPM Update Interval: 1, Number of (Fluid) Iterations: 10 -> Total (physical) simulation time: 0.1 s
This achieves the "steady" behaviour without slowing the simulation down through collisions where I do not want them.
4.) Write Data
(this took only 2 hours to simulate)
5.) Load that Data and turn on DEM Collision Model
6.) Particle Time Step Size: 1e-8 s, No. of Particle Time Steps: 10 000, DPM Update Interval: 1
7.) With particles already distributed in the domain, simulate only a few iterations to get the collisions at the collision wall / rebounding particles.
This way, it should not take so long anymore.
First question: Is there any mistake in this "plan", or could it be a way to work faster?
However, I am having trouble executing that plan. I got to step 4.), but a problem occurs that prevents me from proceeding with the DEM collision model.
Here you can see a screenshot I took after step 4.)
As you can see, the "blue" injection (Injection-9) got too slow and for that reason did not reach the boundary.
Instead of being aborted after some time, all of those particles remained in the domain for the entire simulation.
Without them, I would have roughly 1500 particles in the fluid at a time, a number that should be reasonable for using the DEM collision model.
The problem is that I cannot get those particles to disappear, even when changing the tracking parameters.
The tracking parameters were at their defaults for this first try: Tracking Parameters -> Max. Number of Steps: 500 and Tracking Parameters -> Step Length Factor: 5.
I restarted the simulation from Step 2.) with these settings:
However, all particles still remained in the fluid. I only simulated 10 (fluid) iterations to get results faster, but as you can see it did not work.
This brings me to the second question:
How do the tracking parameters interact with the other parameters (Particle Time Step Size, No. of Particle Time Steps, DPM Update Interval, (Fluid) Iterations)?
From what I read in the User's Guide, the particles should have been aborted after 5 particle time steps. In my simulation, though, no particles were aborted.
July 2, 2021 at 11:00 am - Rob (Ansys Employee)
After 5 steps the particle will register as incomplete, but if it doesn't need 5 steps within the time step (i.e. it's not tracked as rigorously because it's barely moving) it'll stay.
Running on a frozen flow is a good idea, assuming the particles aren't slowed too much by the flow: if they are, you'll see jets suddenly stopping as they hit stationary fluid. You also have particles bouncing up and then falling; these will stop the following particles (so a DEM time step is needed), but you're also tracking particles that are moving quickly. It's one of the reasons we're partnering with ESSS on Ansys Rocky: Fluent handles the flow field and Ansys Rocky the DEM; we have a mesh in Fluent and meshless DEM tracking in Ansys Rocky.
July 2, 2021 at 2:33 pm - Lumono (Subscriber)
Thank you!
One last question then:
Since it is therefore not possible to remove the particles via the tracking parameters, is there any other way to get rid of the particles (that would otherwise remain in the domain) after a designated time, instead of having them stay inside my fluid "forever"?
July 2, 2021 at 2:47 pm - Rob (Ansys Employee)
Yes, you can set a kill criterion based on residence time. There's an example UDF in the manual (I think).
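The residence-time kill criterion Rob mentions could be sketched as a DPM UDF along the following lines. This is only a sketch, not the manual's example: the TP_ macro names follow the 2021-era Tracked_Particle convention, the removal flag and macros may differ between releases, and the 0.05 s cutoff is an arbitrary illustration. Verify everything against the Fluent Customization Manual for your version before use.

```c
#include "udf.h"

/* Sketch of a residence-time kill criterion. ASSUMPTIONS: the
 * TP_TIME/TP_INIT_TIME macros and the MARK_TP/P_FL_REMOVED removal
 * flag of 2021-era releases -- check the Customization Manual for
 * your version, as these names have changed over releases. */

#define MAX_RESIDENCE_TIME 0.05 /* s, arbitrary cutoff for illustration */

DEFINE_DPM_SCALAR_UPDATE(kill_old_particles, c, t, initialize, tp)
{
    if (!initialize &&
        (TP_TIME(tp) - TP_INIT_TIME(tp)) > MAX_RESIDENCE_TIME)
    {
        /* flag the parcel for removal from the domain */
        MARK_TP(tp, P_FL_REMOVED);
    }
}
```

The UDF would be compiled and hooked under Discrete Phase Model -> UDF -> Scalar Update; it runs per tracked parcel, so parcels older than the cutoff are dropped instead of lingering in the domain.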
July 2, 2021 at 3:09 pm - Lumono (Subscriber)
Thank you a lot for your help!