General Mechanical

Topics relate to Mechanical Enterprise, Motion, Additive Print and more

• Federico Mazzoni
Subscriber

Hi!

I'm working with Transient Thermal module for a project about heat exchange simulation in a city.

My geometry consists of 4 "city blocks", a round area surrounding the city, and a sky dome (see screenshot below). The mesh has roughly 20k face elements.

Our goal with this project is to load data (materials/convection/radiation/temperature) and run a simulation, expecting results similar to what we already have from another software package (we already have result data on how the city heats and cools throughout the day, and we're basically migrating that project to Ansys).

At this point we have modeled the materials/convection/conduction and got roughly the results we expected. However, we're now facing a problem with how much data we can load into Ansys in a reasonable time.

Besides modeling convection/materials, we also need to load the following data from outside ansys (csv files basically):

1) Sky dome temperatures.

For every face element of the sky dome, we use historical data to determine a different temperature at each time step throughout the day. For example: at 00:00 one piece of the sky is, say, 15 °C; at 00:30 some other piece of sky is, say, 20 °C. We use scripting to do the calculations and load the temperatures into the model. Our sky dome has 297 face elements, so we end up with 297 Temperature objects, each with its own tabular data defining that face element's temperature at each 30-minute step throughout the day. We have no problems up to this point.
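The grouping step described above can be sketched in plain Python. This is a minimal sketch, not the actual script: the CSV layout (columns `time`, `face_id`, `temperature_C`) is a hypothetical assumption, since the real file format isn't shown in the thread.

```python
import csv
import io
from collections import OrderedDict

def build_face_tables(csv_text):
    """Group (time, temperature) pairs by sky-dome face id.

    Assumes hypothetical columns: time, face_id, temperature_C.
    Returns an OrderedDict: face_id -> list of (time, temperature) rows,
    i.e. one tabular-data block per Temperature object.
    """
    tables = OrderedDict()
    for row in csv.DictReader(io.StringIO(csv_text)):
        face = row["face_id"]
        tables.setdefault(face, []).append((row["time"], float(row["temperature_C"])))
    return tables

sample = """time,face_id,temperature_C
00:00,1,15.0
00:00,2,16.5
00:30,1,15.8
00:30,2,20.0
"""
tables = build_face_tables(sample)
```

Each value in `tables` is then the time/value table assigned to one of the 297 Temperature objects.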

2) Heat flux.

There are also heat fluxes that we need to load, one for every face element of the city. We need tabular time/value data just like the sky dome temperatures, but in this case for each and every face element of the city mesh, so we need somewhere around ~20k heat flux objects. This is where we start facing performance issues. It simply takes too long to load all of these Heat Fluxes, and it gets slower and slower the more Heat Fluxes are already in the model. We managed to load the 4,072 heat fluxes that correspond to the radiation acting on the roofs of the city, which took about 8 hours, but it would take forever to load the remaining ~15k.
The way our script works is: we read some CSV files and generate a JSON object (using OrderedDict) that records which (mesh element, face id) has which flux value. Generating this JSON takes about 5 seconds. We then iterate through the JSON's keys, create Heat Flux objects (using heat_flux = analysis.AddHeatFlux()) and assign each one its data. This appears to be the bottleneck.
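The fast part of that pipeline (CSVs in, keyed dict out) might look something like the sketch below. The column names `element_id`/`face_index` and the input shapes are assumptions for illustration; the slow part, the AddHeatFlux() loop, is deliberately left out since that is the bottleneck being replaced.

```python
import csv
import io
from collections import OrderedDict

def build_heat_flux_dict(mapping_csv, flux_columns_by_time):
    """Build {"elementId_faceIndex": {time_label: flux}} from csv data.

    mapping_csv: rows of element_id,face_index (hypothetical layout),
    whose row order matches the per-time-step flux columns.
    flux_columns_by_time: OrderedDict of time_label -> list of flux
    values, one per mapping row.
    """
    keys = ["%s_%s" % (r["element_id"], r["face_index"])
            for r in csv.DictReader(io.StringIO(mapping_csv))]
    fluxes = OrderedDict()
    for i, key in enumerate(keys):
        fluxes[key] = OrderedDict(
            (t, column[i]) for t, column in flux_columns_by_time.items())
    return fluxes

mapping = "element_id,face_index\n101,0\n101,1\n202,0\n"
per_time = OrderedDict([("0:00", [5.0, 6.0, 7.0]), ("0:10", [5.5, 6.5, 7.5])])
fluxes = build_heat_flux_dict(mapping, per_time)
```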

To confirm that, I went back to the sky dome and re-ran the script that creates Temperature objects, measuring how long each run took without deleting the previously loaded data. This is what I got:

- Create 150 Temperatures: 64 seconds

- Create another 150 Temperatures: 73 seconds

- Another 150: 89 seconds

- Another 150: 109 seconds

- Another 150: 126 seconds

- Another 150: 149 seconds

So the question is: is there another way to load this data in the way we want, but reducing the time it takes to do so?
Thanks in advance, and if you need me to describe anything else or give more insight into what we're trying to accomplish, just ask!

Here are the screenshots:

Our city:

The city surroundings, with bigger mesh elements (precision is less important the further away from the city we are):

Sky dome, with 297 face elements:

Example of temperature data for one element of the sky dome:

The heat fluxes we managed to load so far:

Example of one heat flux data:

A frame of the simulation:

• mjmiddle
Ansys Employee

If you place python script commands in an indented block like this, the graphical update is delayed until the end of the code block:

with Transaction():
    for element in element_list:
        # create and assign load objects here
However, the GUI is still not really meant to handle 20k load objects in the Outline well. It would be better to do this in an APDL command object if you choose to script it. See the SF command:

https://ansyshelp.ansys.com/account/secured?returnurl=/Views/Secured/corp/v231/en/ans_cmd/Hlp_C_SF.html

esel,s,elem,,element_number
nsle
sf,all,hflux,_table_variable

The _table_variable will be a table of values over time for each element. Create one heat flux load object in the GUI and write the Ansys input file to see the exact format.
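Since the flux data already lives in a Python dict, one option is to generate those APDL lines programmatically rather than creating GUI objects. The sketch below is an assumption-laden illustration: the *DIM/TIME table syntax should be verified against an input file written from a real heat flux load object, as suggested above.

```python
def apdl_flux_commands(element_id, series, table_name):
    """Emit APDL lines that define a time table and apply it as a heat flux.

    series: list of (time_seconds, flux_value) pairs. table_name must be a
    valid APDL parameter name. Mirrors the esel/nsle/sf pattern above; the
    table-definition syntax is an assumption to check against a written
    Ansys input file.
    """
    lines = ["*DIM,%s,TABLE,%d,1,1,TIME" % (table_name, len(series))]
    for i, (t, v) in enumerate(series, start=1):
        lines.append("%s(%d,0)=%g" % (table_name, i, t))  # time column
        lines.append("%s(%d,1)=%g" % (table_name, i, v))  # flux column
    lines += ["ESEL,S,ELEM,,%d" % element_id,
              "NSLE",
              "SF,ALL,HFLUX,%%%s%%" % table_name]
    return "\n".join(lines)

cmds = apdl_flux_commands(101, [(0.0, 5.0), (600.0, 5.5)], "FLUX101")
```

The generated text for all ~20k elements can then be pasted into (or written to a file read by) a single APDL command object, avoiding one Outline object per flux.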

Better yet, why not use "External Data" to import all the heat fluxes:

It must be difficult to know the heat flux at each element number; it's more typical to know the XYZ location of each heat flux. For each time point, you can add another data file in the same "External Data" system and mark one as the "master". Or just append more columns of heat flux data to a single file: each column would hold the heat flux values for another time point.

Also, it seems your dome is stagnant air, which is too much of an insulator. This does not model heat transfer well: even in dead air there are natural convection currents that carry heat much faster than a solid body would conduct it. Capturing that would need a CFD solver, although I know some people have done some fudging with the convections or the stagnant-air properties; I don't remember the specifics. We usually don't model the stagnant air in an FEA heat transfer analysis; the convection BCs handle natural convection to the environment.

• Federico Mazzoni
Subscriber

As you suggested, I'm now trying to use External Data to load the fluxes. I have a few questions, but I think I should add a bit more context:

1) The heat flux data that I need to use comes from an algorithm (not developed by me) that determines the heat flux per mesh element per time step, resulting in 144 .csv files (one for each 10-minute time step) with ~20k rows each. (This also seems like a good indicator for using External Data.) Each csv is basically one column of flux values.

2) I also have a .csv file that keeps track of the coordinates of each face's nodes. The row order is the same as in the .csv files from 1). That file looks like this:

Now, going back to my script: essentially what I do is sweep through various named selections via the Mechanical API (roof faces, wall faces, etc.) and find the corresponding rows in the .csv from 2). So, after some work, I end up with a JSON like this:

heat_fluxes = {
    "elementId_faceIndex": {
        "0:00": heatFlux1,
        "0:10": heatFlux2,
        ...
    }
}

which is the object that I then iterate to create the Heat Flux elements.

Now, I'm not exactly sure how the External Data coordinates work, as opposed to choosing an element via ID and face index and then assigning a heat flux to it. In other words: how do I use the data I have about each face (the coordinates of its 4 corner nodes) to load the heat fluxes and get the same result? I tried some basic use cases to get familiar with External Data. More specifically, I created 2 heat flux objects, one for each of 2 contiguous face elements of the mesh, and then tried to do the same with External Data, obtaining different results. For only 1 heat flux I got good results, though. Most likely I'm doing something wrong here :)

• mjmiddle
Ansys Employee

Yes, I think External Data is the best choice to apply the heat flux.

I see one heat flux value for each row containing 4 node XYZ locations in your data. How are you applying that one heat flux value across the 4 corner node locations? If you apply the same value at each corner node, it will conflict with the value of an adjacent element applied at its 4 corner nodes, 2 of which are shared with the first element. You would have to compute the centroid of those 4 node locations and apply the heat flux there, because the face centroid is the location where the heat flux is applied when using External Data import. Expecting the values to be applied at the node locations, when they are actually applied at element centroid locations, is probably why you see some discrepancy between the two methods.
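The centroid step above is simple to pre-compute when writing the External Data file. A minimal sketch, assuming quad faces given as 4 (x, y, z) corner tuples:

```python
def face_centroid(corners):
    """Centroid (coordinate average) of a face's corner node locations.

    corners: list of (x, y, z) tuples, e.g. the 4 corner nodes of a quad
    face. External Data applies the flux at the face centroid, so export
    one (centroid, flux) row per face instead of its 4 node locations.
    """
    n = float(len(corners))
    return tuple(sum(c[i] for c in corners) / n for i in range(3))

c = face_centroid([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 2.0, 0.0), (0.0, 2.0, 0.0)])
```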

You can select 144 csv files for External Data if you want, but since your python script controls the output of the data at each 10-minute interval, why not just open the data file in append mode, open(file, 'a'), and write everything to one csv file?

I guess it would be hard to add columns instead of rows, but I think you can use the python csv module to do that. Or you can save all the data to a large dictionary while your script runs over all time points, then write it all out at the end to one file in the format you want, with each column being a time point. This way you only need one csv file. You would then have multiple rows assigned as heat flux in the External Data setup:

Then, in Mechanical, you have to "add row" to the table and set each row's ID:
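The column-per-time-point merge suggested above could be sketched as follows. This is an illustration only: it assumes the 144 files are single columns of flux values in a shared row order, and that face centroids have already been computed; the header names (`x`, `y`, `z`, `flux_t0`, ...) are hypothetical.

```python
import csv
import io

def merge_flux_columns(per_step_texts, centroids):
    """Combine per-time-step flux columns into one wide csv.

    per_step_texts: list of csv texts, one per time step, each a single
    column of flux values in the same row order (like the 144 files).
    centroids: list of (x, y, z) face centroids in that same row order.
    Output columns: x, y, z, then one flux column per time step.
    """
    columns = [[float(line) for line in text.strip().splitlines()]
               for text in per_step_texts]
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    writer.writerow(["x", "y", "z"] + ["flux_t%d" % i for i in range(len(columns))])
    for row_idx, (x, y, z) in enumerate(centroids):
        writer.writerow([x, y, z] + [col[row_idx] for col in columns])
    return out.getvalue()

merged = merge_flux_columns(["5.0\n6.0\n", "5.5\n6.5\n"],
                            [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```

Each flux column of the merged file then becomes one row of the External Data worksheet, mapped to its own time point in the analysis.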