It's time to summarize the implementation of my particle level set fluid simulator and to plan its future development.

It started off by combining Stam's stable fluid solver with Foster and Fedkiw's particle-enhanced level set approach. Then I immediately jumped to some recent developments such as vortex particles, the MacCormack method, and divergence-free velocity extrapolation. Some of these extensions were successful, some not. Throughout, the foundation of the level set machinery has been the fast marching method (FMM) for reinitialization and semi-Lagrangian advection. Combined with the particle correction, this is what Enright et al. (2004) advocated and what a series of other authors built on (Losasso's octree method, Guendelman's coupling to thin shells, and Losasso's multiple level sets and melting of solids into liquids). Octrees are great, but their implementation is too complex. Without them, however, the results look very bad unless a high-resolution grid is used. Yet the original paper by Enright et al. (2002) produced highly realistic results. How can that be?
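For reference, semi-Lagrangian advection just traces each grid point backward through the velocity field and interpolates the old level set values at the departure point; it is unconditionally stable but only first-order accurate, hence quite diffusive. Below is a minimal 2D sketch; the Grid2D layout, the cell-centered velocity sampling, and the bilerp helper are illustrative assumptions, not my solver's actual data structures.

```cpp
#include <algorithm>
#include <vector>

struct Grid2D {
    int nx, ny;
    double dx;                   // uniform cell size
    std::vector<double> phi;     // level set values, cell-centered, size nx*ny
    double& at(int i, int j) { return phi[j * nx + i]; }
};

// Bilinear interpolation of phi at an arbitrary point (x, y) in grid units.
double bilerp(Grid2D& g, double x, double y) {
    x = std::clamp(x, 0.0, g.nx - 1.0);
    y = std::clamp(y, 0.0, g.ny - 1.0);
    int i = std::min((int)x, g.nx - 2), j = std::min((int)y, g.ny - 2);
    double fx = x - i, fy = y - j;
    return (1 - fx) * (1 - fy) * g.at(i, j)     + fx * (1 - fy) * g.at(i + 1, j)
         + (1 - fx) * fy       * g.at(i, j + 1) + fx * fy       * g.at(i + 1, j + 1);
}

// One semi-Lagrangian step: backtrace each grid point through the velocity
// field (u, v sampled at cell centers here for simplicity) and interpolate
// the old level set at the departure point.
void advect(Grid2D& g, const std::vector<double>& u,
            const std::vector<double>& v, double dt) {
    std::vector<double> newPhi(g.phi.size());
    for (int j = 0; j < g.ny; ++j)
        for (int i = 0; i < g.nx; ++i) {
            double xb = i - dt * u[j * g.nx + i] / g.dx;   // departure point,
            double yb = j - dt * v[j * g.nx + i] / g.dx;   // in grid units
            newPhi[j * g.nx + i] = bilerp(g, xb, yb);
        }
    g.phi.swap(newPhi);
}
```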

In recent months, I have been modifying the code to replace FMM with a direct solve of the reinitialization equation using 5th-order WENO and 3rd-order Runge-Kutta. Furthermore, level set advection is now also done with 5th-order WENO and 3rd-order Runge-Kutta, and particles are advected with 3rd-order Runge-Kutta. Essentially I have turned the code into Enright's 2002 incarnation. The disadvantage is the performance hit, but there is an advantage too: the code is now readily parallelized for multithreading and multiple processors.
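In outline, the new reinitialization integrates ∂φ/∂τ = −S(φ₀)(|∇φ| − 1) in pseudo-time, with the one-sided derivatives of φ reconstructed by 5th-order WENO, combined via Godunov upwinding on the smoothed sign S(φ₀), and stepped with TVD 3rd-order Runge-Kutta. Here is a minimal 1D sketch under those assumptions; the helper names and the crude boundary clamping are mine, not the production code.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// WENO5 reconstruction of a one-sided derivative from five consecutive
// divided differences v1..v5 (the standard Hamilton-Jacobi WENO weights).
double weno5(double v1, double v2, double v3, double v4, double v5) {
    const double eps = 1e-6;
    double s1 = 13.0/12*(v1-2*v2+v3)*(v1-2*v2+v3) + 0.25*(v1-4*v2+3*v3)*(v1-4*v2+3*v3);
    double s2 = 13.0/12*(v2-2*v3+v4)*(v2-2*v3+v4) + 0.25*(v2-v4)*(v2-v4);
    double s3 = 13.0/12*(v3-2*v4+v5)*(v3-2*v4+v5) + 0.25*(3*v3-4*v4+v5)*(3*v3-4*v4+v5);
    double a1 = 0.1/((s1+eps)*(s1+eps));
    double a2 = 0.6/((s2+eps)*(s2+eps));
    double a3 = 0.3/((s3+eps)*(s3+eps));
    return (a1*(v1/3 - 7*v2/6 + 11*v3/6) + a2*(-v2/6 + 5*v3/6 + v4/3)
          + a3*(v3/3 + 5*v4/6 - v5/6)) / (a1 + a2 + a3);
}

// Right-hand side of d(phi)/d(tau) = -S(phi0) * (|phi_x| - 1) with Godunov
// upwinding; phi0 is the level set before reinitialization.
std::vector<double> rhs(const std::vector<double>& phi,
                        const std::vector<double>& phi0, double dx) {
    int n = (int)phi.size();
    std::vector<double> out(n);
    auto d = [&](int i) {                 // forward difference, clamped at the ends
        i = std::clamp(i, 0, n - 2);
        return (phi[i + 1] - phi[i]) / dx;
    };
    for (int i = 0; i < n; ++i) {
        double dm = weno5(d(i-3), d(i-2), d(i-1), d(i),   d(i+1));  // phi_x^-
        double dp = weno5(d(i+2), d(i+1), d(i),   d(i-1), d(i-2));  // phi_x^+
        double s  = phi0[i] / std::sqrt(phi0[i]*phi0[i] + dx*dx);   // smoothed sign
        double g  = (s > 0)                                         // Godunov |phi_x|
            ? std::max(std::max(dm, 0.0), -std::min(dp, 0.0))
            : std::max(-std::min(dm, 0.0), std::max(dp, 0.0));
        out[i] = -s * (g - 1.0);
    }
    return out;
}

// One TVD 3rd-order Runge-Kutta step in pseudo-time.
void reinitStep(std::vector<double>& phi, const std::vector<double>& phi0,
                double dx, double dtau) {
    auto euler = [&](const std::vector<double>& p) {      // p + dtau * L(p)
        std::vector<double> L = rhs(p, phi0, dx), q(p.size());
        for (size_t i = 0; i < p.size(); ++i) q[i] = p[i] + dtau * L[i];
        return q;
    };
    std::vector<double> p1 = euler(phi);
    std::vector<double> p2 = euler(p1);
    for (size_t i = 0; i < phi.size(); ++i) p2[i] = 0.75*phi[i] + 0.25*p2[i];
    std::vector<double> p3 = euler(p2);
    for (size_t i = 0; i < phi.size(); ++i) phi[i] = phi[i]/3 + 2.0/3*p3[i];
}
```

A handful of such pseudo-time steps, with Δτ on the order of half a grid cell, is typically enough to restore the signed distance property in a band around the interface.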

Here is the present status, though it is about six years behind the research frontier. You may notice the mass loss wherever liquid sheets thin out; this is the best I can do at this grid resolution.

What's next? Clearly an octree is an ideal candidate, as it can resolve the thin sheets the underlying uniform grid misses, but it requires major data structure changes in the code. In addition, for any decent production simulation, even an octree couldn't handle all the details on a single processor. So that leaves only one option: parallel execution, and that should be the way my solver goes. Fortunately, my recent code changes make the task relatively easy; see the sketch after the image caption below. For example, ILM couldn't have finished the shots for "Poseidon" if Fedkiw's group at Stanford hadn't made their PhysBAM software run on multiple processors; they used 8 to 16 processors. Frank Losasso ran the hero simulation for the maelstrom shot in "Pirates of the Caribbean: At World's End" on 40 processors. Moreover, only when a simulation is run at sufficient resolution are the removed particles ejected in areas that visually match where, in reality, the water's surface tension breaks, so that they can be used to represent spray and bubbles. See the image below for reference.

ILM used a new fluid dynamics engine, developed in cooperation with Stanford University, to create the ocean, including waves, turbulence, and bubbles.
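Back to parallel execution: this is exactly where the switch away from FMM pays off. FMM's heap-ordered marching is inherently sequential, whereas the WENO/RK3 update at each cell reads only a fixed local stencil of the previous iterate and writes a single value, so the whole sweep is embarrassingly parallel. A hedged sketch with OpenMP follows; the first-order upwind stand-in abbreviates the WENO5 version sketched earlier, and the function names are mine, not the actual solver's.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Stand-in per-cell reinitialization RHS (first-order upwind for brevity;
// the real code would use the WENO5 one-sided derivatives sketched above).
static double reinitRhsAt(const std::vector<double>& phi,
                          const std::vector<double>& phi0,
                          int i, int j, int k, int nx, int ny, int nz, double dx) {
    auto at = [&](int a, int b, int c) {
        a = std::clamp(a, 0, nx - 1); b = std::clamp(b, 0, ny - 1);
        c = std::clamp(c, 0, nz - 1);
        return phi[(c * ny + b) * nx + a];
    };
    int idx = (k * ny + j) * nx + i;
    double s = phi0[idx] / std::sqrt(phi0[idx]*phi0[idx] + dx*dx);  // smoothed sign
    const int off[3][3] = {{1,0,0}, {0,1,0}, {0,0,1}};
    double g2 = 0.0;                                // squared Godunov gradient
    for (int a = 0; a < 3; ++a) {
        double m = (at(i, j, k) - at(i-off[a][0], j-off[a][1], k-off[a][2])) / dx;
        double p = (at(i+off[a][0], j+off[a][1], k+off[a][2]) - at(i, j, k)) / dx;
        double d = (s > 0) ? std::max(std::max(m, 0.0), -std::min(p, 0.0))
                           : std::max(-std::min(m, 0.0), std::max(p, 0.0));
        g2 += d * d;
    }
    return -s * (std::sqrt(g2) - 1.0);
}

// Each cell writes only its own output entry and reads only the previous
// iterate, so a single OpenMP pragma parallelizes the sweep; FMM's causal
// heap ordering offers no such decomposition.
void parallelRhs(const std::vector<double>& phi, const std::vector<double>& phi0,
                 std::vector<double>& out, int nx, int ny, int nz, double dx) {
    #pragma omp parallel for collapse(2) schedule(static)
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i)
                out[(k * ny + j) * nx + i] =
                    reinitRhsAt(phi, phi0, i, j, k, nx, ny, nz, dx);
}
```

Scaling beyond a single machine, as in the PhysBAM runs mentioned above, would additionally require domain decomposition with ghost-band exchange (e.g. over MPI), but the same local-stencil property is what makes that tractable.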

posted on 2008-12-24 09:44 by BSSRDF