Last time I introduced you to using RSM to submit to your local machine. That probably seemed pretty similar to the normal way of solving things, but you can apply the same workflow to solving on remote computers too! Recall how ergonomic it was to solve with RSM in the last post, and compare it to how things normally go on a remote computer:
- Saving/Archiving the project or drilling down in the file structure to find specific files
- Copying the files via network share, Dropbox or similar
- Logging into the remote machine, starting the software, possibly figuring out how to run it in batch, opening your files and starting the solution process
- Copying the results files back to your computer, deciding how & whether to overwrite and version them
- Loading the results into the software
All of these steps are handled for you by RSM. And when a compute machine doesn’t need a graphics card, you can specialize it more toward computation, as noted on our ANSYS Hardware Requirements page.
Setting this up became both simpler and more complex with R18. The major way it became simpler is that it no longer requires you to join a Windows Domain, at least for a simple setup that runs on one remote computer. This means you do not need complicated IT infrastructure just to take advantage of your compute resources. A motivated engineer with administrative access to the compute server can set this up on their own. For more complex configurations, such as having multiple machines in a queue or distributing solves across computers, you will want to get IT involved.
Here is some terminology:
- Remote Solve Manager: Integrates directly with ANSYS and provides facilities for transferring simulation data between machines and solution monitoring. Meant to be the integration layer between ANSYS and existing compute infrastructure.
- ANSYS RSM Cluster (ARC): Added in R18, ARC is the ANSYS-provided compute infrastructure for the case where you don’t already have a commercial queuing system, such as Microsoft HPC or LSF. A queuing system like ARC needs to be present when using RSM unless it is replaced by one of those alternatives.
- Master & Compute Nodes: ARC nodes can have master or compute roles, suitable for setting up a cluster of machines for distributed runs. For the purposes of this guide, the Master & Compute Nodes will be one and the same.
- RSM Queue vs ARC Queue: Unfortunately, you will have to set up queues for both your queuing system, such as ARC, AND in RSM. This process is described as mapping cluster queues to RSM queues.
- Cluster: For the purposes of this guide, each “Cluster” is a single machine. We’re punting on all the more complex setups!
Remote Machine Setup
You will only need to do this once. Afterwards, everyone who wants to submit to this machine just needs to follow the Local Machine Setup instructions further down.
Start an admin prompt
A lot of this will need to be done on the command line. Right-click Start Menu -> Command Prompt and select Run as Administrator.
For Linux, make sure to use the sudo command or log in as root or another sufficiently privileged account.
Enable the RSM service
Type in the command prompt:
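The original command did not survive here; as a sketch, in ANSYS 18.0 the RSM launcher service is installed with the AnsConfigRSM utility from the RSM\bin directory. Treat the exact path and the `-launcher` flag as assumptions from the v180 layout and verify them against the RSM documentation for your release:

```shell
:: Sketch for ANSYS 18.0 on Windows -- path and flag assumed, verify against your install
"%AWP_ROOT180%\RSM\bin\AnsConfigRSM.exe" -launcher
```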
You should see a lot of text go by in the console and no error messages.
For Linux, you should be able to replace the enclosing % characters with a leading $ and the .exe with a .sh.
Add ARC tools to the path
ARC is mostly set up via the command line. There are a number of command-line utilities that we want to add to the `PATH` so that we don’t have to type out the full directory each time:
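As a sketch, assuming the ANSYS 18.0 layout where the installer sets the `AWP_ROOT180` environment variable and the ARC tools live under `RSM\ARC\tools\winx64` (check your install if the directory differs):

```shell
:: Assumed ARC tools location for ANSYS 18.0 -- adjust if your install differs
set "PATH=%PATH%;%AWP_ROOT180%\RSM\ARC\tools\winx64"
```

Note that `set` only affects the current command prompt; reopen an admin prompt and you will need to set it again (or add the directory to the system PATH permanently).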
If you did this incorrectly, you’ll get a “___ is not recognized as an internal or external command” message in the subsequent steps.
For Linux, use export and switch out the % variable substitutions as above.
Switch ARC to advanced mode and install the master & node services
In order to do anything with ARC beyond what the ANSYS install process does for you automatically, you need to switch it to advanced mode:
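The command was lost here; assuming the R18 `arcconfig` utility is on your PATH from the previous step, the mode switch looks something like the following (the subcommand is an assumption -- confirm it in the ARC command reference for your release):

```shell
:: Assumed syntax -- confirm the subcommand in your release's ARC documentation
arcconfig switch advanced
```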
This computer will both receive and run jobs so the master and node services need to be enabled:
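As a sketch of the service installation, ANSYS ships an `installservice` script alongside the other ARC tools; treat the script name and flags as assumptions and verify them in the ARC documentation:

```shell
:: Assumed service-install commands -- verify names and flags in the ARC documentation
installservice -arcmaster
installservice -arcnode
```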
If you are so inclined, you can set limits on the number of cores and disk space. Run the following command to see the help for the command on how to do so:
`arcconfig node modify`
Add ALL THE THINGS to the firewall
You’ll need to use the Advanced Windows Firewall and set filtering rules with port numbers, which is a bit old-school. To get to the advanced firewall program, I usually press the Start Menu button, type in firewall, and select Windows Firewall with Advanced Security.
Once you have it open:
- Click Inbound Rules on the left pane
- Click New Rule on the right pane
- Select Port and click Next
- Keep TCP selected and for the port numbers enter `11180,12180,13180,40000-59999`
- Select Next three times
- Enter a name for the rule (I like to use `RSM/ARC Ports`)
- Select Finish
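If you prefer to stay in the admin command prompt, the same inbound rule can be created in one line with netsh (the rule name is arbitrary):

```shell
netsh advfirewall firewall add rule name="RSM/ARC Ports" dir=in action=allow protocol=TCP localport=11180,12180,13180,40000-59999
```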
For Linux users, I would recommend trying Uncomplicated Firewall (ufw) or getting some help from IT here.
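As a sketch with ufw, opening the same ports looks like this (assumes ufw is installed and enabled on the compute server):

```shell
# Open the RSM/ARC ports for incoming TCP connections
sudo ufw allow 11180/tcp
sudo ufw allow 12180/tcp
sudo ufw allow 13180/tcp
sudo ufw allow 40000:59999/tcp
sudo ufw reload
```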
Cache the credentials:
You may also cache your password, just to be safe.
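Assuming the ARC tools are on your PATH from the earlier step, caching is done with the ARC credentials utility; treat the exact name as an assumption and check the ARC command reference:

```shell
:: Assumed utility name -- prompts for the password to cache
arccredentials
```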
And enter your password at the command prompt.
Create a staging folder that is shared
The last thing that you need to do is create a Staging folder somewhere for remote computers to upload analysis files to:
- Create a folder named `Staging` (or whatever you would like)
- Right click on it and select Share With -> Specific People
- In the window that pops up, you can just specify the current user
You should have a network address for the shared folder that looks something like `\\machinename\Staging`. Save this for later.
For Linux, you should look into NFS or SMB.
Local Machine Setup
Compared to the remote machine setup, the local machine is a breeze. Everything is done through the GUI of the new RSM Configuration utility, the program you use to define the machines and queues you can submit to. In the Start Menu, go to `ANSYS 18.0 -> Remote Solve Manager -> RSM Configuration`.
Once here, follow these steps:
- Press the Add Cluster button or right click and select Add Cluster in the tree
- Name the cluster (this is what will show up in Workbench/Mechanical)
- Enter the hostname on the network in Submit Host and select the operating system
- Leave the Cluster type as ARC
- Select Apply and then select the File Management tab
- Select RSM internal file transfer mechanism (ideally you are on a fast local network)
- Enter the staging folder path that you setup a few steps ago
- Select Apply and then select the Queues tab
- Click the Import/Refresh Cluster Queues button
- Enter the credentials (machine\username :: password) like you are logging onto the remote computer and select Ok
- If successful, the RSM queues will be populated with queues from the cluster. Make sure that the default queue is enabled
- Select Apply
- Test the queue by pressing the Test button
Your test should be successful after a few moments. Congrats, your new RSM queue is ready to be used!
Keep up to date on hardware and ANSYS simulation news and tips by subscribing to our newsletter:
ANSYS Fluent can track the motion of particles through a fluid using the Discrete Phase Model (DPM). In the most basic approach to implementing particles in a CFD analysis, the user specifies the injection location, speed, mass flow rate, particle size, and material.
Particles can interact with boundary walls in several ways: reflecting, escaping, being trapped, sliding, or forming a film.
DPM offers an impressive range of advanced capabilities, including:
- Random effects of small-scale turbulent eddies
- Particle size distribution
- Two-way interaction where particles influence flow
- Particle-particle interaction for dense populations
- Particle heat transfer and vaporization/boiling of droplets & bubbles
“Particle Tracks” is the primary tool to postprocess particle behavior. In addition to creating a graphical display of particle paths, this tool offers a powerful option to quantify particle behavior. From the Particle Tracks menu, activate Report Type -> Summary, and Report To -> Console. The text interface will report statistics for individual boundaries where particles exit the system, including Particle Count, Elapsed Time and Mass Flow.
Want to see something cool? Here is one thing that you can try right now:
- Open an ANSYS Workbench project with an analysis in it
- Click the Job monitor button to bring up the Job Monitor. It should be pretty uninteresting right now
- Set it to solve with “Submit to a Remote Queue” via right clicking on Solution, selecting Properties and setting the Solution Process -> Update Option to Submit to Remote Solve Manager (pictured below)
- Solve your analysis through workbench via the Update toolbar button
- Open ANOTHER Workbench with an analysis in it (Trust me on this. It doesn’t take a license just to open Workbench)
- Set the solution options and solve as in steps 3 & 4
- Observe that there are no license errors or overloaded processors, and that your solutions from unrelated projects are solved sequentially to completion
What magical place did this simulation job go where it is organized and solved in harmony with jobs from unrelated Workbench projects? The answer is to your local queue. There is a service that is installed right along with the rest of ANSYS called Remote Solve Manager. While it can do much more with a little more configuration, it’s nice to see what you get with exactly no effort on your part.
Remote Solve Manager in Mechanical
Just because it’s not solving in the foreground, doesn’t mean you can’t be actively monitoring the solution. Here’s what it looks like in Mechanical:
Note that submitting from Mechanical will use a separate license for the solve. Make sure you have a license to spare or a prep/post license.
Remote Solve Manager In Fluent
Similarly, with Fluent you can monitor residuals, probes, and solution animations through the Monitor Solution right-click menu.
Remote Solve Manager in CFX
In CFX, the workflow is almost unchanged. You can use the CFX Solution Monitor as usual.
Join us next time, when I’ll show you the simple way to put the “Remote” in Remote Solve Manager!
Ozen Engineering, Inc.
Prestigious companies in Northern California turn to Ozen Engineering as the single-source of reliable simulation solutions. Although Ozen Engineering is headquartered in the heart of Silicon Valley, we collaborate with best-in-class companies worldwide to optimize product design performance and improve product development processes for our clients wherever they are located and across a wide variety of industries. We are dedicated to supporting our clients. We are passionate about developing accurate simulation and realistic modeling as core competencies within client companies and helping them realize unparalleled results from their FEA and CFD investments.