document materials, stimuli, measurements, rendering
# Features

this library takes care to separate the following concerns from the core, math-heavy "simulation" logic:

- Stimuli
- Measurements
- Render targets (video, CSV, etc)
- Materials (conductors, non-linear ferromagnets)
- Float implementation (for CPU simulations only)
the simulation interacts with each of these only through a trait interface, so each one is independently swappable.
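
as an illustration, the swappable-by-trait shape looks roughly like this (every name below is hypothetical, not the library's actual traits — a sketch of the pattern, not the implementation):

```rust
/// drives external excitation at a given time (hypothetical trait).
trait Stimulus {
    fn at(&self, t: f64) -> f64;
}

/// samples some quantity from the simulation state (hypothetical trait).
trait AbstractMeasurement {
    fn measure(&self, state: &[f64]) -> f64;
}

struct SineStim { freq: f64 }
impl Stimulus for SineStim {
    fn at(&self, t: f64) -> f64 {
        (2.0 * std::f64::consts::PI * self.freq * t).sin()
    }
}

struct MeanField;
impl AbstractMeasurement for MeanField {
    fn measure(&self, state: &[f64]) -> f64 {
        state.iter().sum::<f64>() / state.len() as f64
    }
}

/// the core stepper only ever sees the trait interface,
/// so any `Stimulus` implementation can be dropped in.
fn step(state: &mut [f64], stim: &dyn Stimulus, t: f64) {
    let drive = stim.at(t);
    for s in state.iter_mut() {
        *s += drive;
    }
}
```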

common stimulus types live in [src/stim.rs](src/stim.rs).
common measurements live in [src/meas.rs](src/meas.rs).
common render targets live in [src/render.rs](src/render.rs). these change infrequently enough that [src/driver.rs](src/driver.rs) has some specialized helpers for each render backend.
common materials are spread throughout [src/mat](src/mat/mod.rs).
different float implementations live in [src/real.rs](src/real.rs).
if you're getting NaNs, you can run the entire simulation on a checked `R64` type in order to pinpoint the moment those are introduced.
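
the idea behind a checked real type can be sketched in a few lines (a toy stand-in for `R64`; the real type in src/real.rs covers far more of the numeric surface):

```rust
/// a float wrapper that validates every arithmetic result, so the first
/// operation that produces a NaN panics at the exact spot it happens,
/// instead of silently poisoning the rest of the simulation.
#[derive(Clone, Copy, Debug, PartialEq)]
struct CheckedF64(f64);

impl CheckedF64 {
    fn new(v: f64) -> Self {
        assert!(!v.is_nan(), "NaN introduced");
        CheckedF64(v)
    }
}

impl std::ops::Add for CheckedF64 {
    type Output = CheckedF64;
    fn add(self, rhs: CheckedF64) -> CheckedF64 {
        CheckedF64::new(self.0 + rhs.0)
    }
}

impl std::ops::Div for CheckedF64 {
    type Output = CheckedF64;
    fn div(self, rhs: CheckedF64) -> CheckedF64 {
        CheckedF64::new(self.0 / rhs.0)
    }
}
```

because the simulation is generic over its float type, swapping this in is a type-parameter change rather than a rewrite.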
## Materials
of these, the materials have the most "gotchas".
each cell owns an associated material instance.
in the original CPU implementation of this library, each cell had an `E` and `H` component,
and any additional state was required to be held in the material. so a conductor material
might hold only some immutable `conductivity` parameter, while a ferromagnetic material
might hold similar immutable material parameters _and also a mutable `M` field_.
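
that layout can be sketched like so (field names and the toy update rule are illustrative, not the library's actual definitions):

```rust
#[derive(Clone, Copy, Default)]
struct Vec3 { x: f64, y: f64, z: f64 }

/// hypothetical material trait: materials with no mutable state
/// simply keep the default no-op step.
trait Material {
    fn step(&mut self, _h: Vec3) {}
}

/// immutable parameters only: nothing changes during the simulation.
struct Conductor {
    conductivity: f64,
}
impl Material for Conductor {}

/// immutable parameters *plus* mutable per-cell state `M`.
struct Ferromagnet {
    m_sat: f64, // immutable saturation magnetization (illustrative)
    m: Vec3,    // mutable magnetization, updated every step
}
impl Material for Ferromagnet {
    fn step(&mut self, h: Vec3) {
        // toy update, not the real physics: clamp the field into saturation
        self.m = Vec3 {
            x: h.x.clamp(-self.m_sat, self.m_sat),
            y: h.y.clamp(-self.m_sat, self.m_sat),
            z: h.z.clamp(-self.m_sat, self.m_sat),
        };
    }
}
```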

spirv/rust-gpu requires stronger separation of state, so this `M` field had to be lifted
completely out of the material. as a result, the material API differs slightly between the CPU
and spirv backends. as you saw in the examples, that difference doesn't have to appear at the user
level, but you will see it if you're adding new materials.
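
to make the lifted-state shape concrete, here is a sketch (hypothetical names and signatures; the real spirv-side trait differs): the material becomes a pure function of its parameters, and the per-cell `M` lives in a buffer owned by the simulation.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Vec3 { x: f32, y: f32, z: f32 }

/// spirv-style material: no mutable fields; the current `M` flows
/// through the method instead of being stored inside the material.
trait StatelessMaterial {
    fn next_m(&self, m: Vec3, h: Vec3) -> Vec3;
}

struct ClampFerromagnet { m_sat: f32 }
impl StatelessMaterial for ClampFerromagnet {
    fn next_m(&self, _m: Vec3, h: Vec3) -> Vec3 {
        // toy update, not the real physics
        Vec3 {
            x: h.x.clamp(-self.m_sat, self.m_sat),
            y: h.y.clamp(-self.m_sat, self.m_sat),
            z: h.z.clamp(-self.m_sat, self.m_sat),
        }
    }
}

/// the simulation owns the state: one `M` per cell, in its own buffer.
fn step_all(mat: &impl StatelessMaterial, ms: &mut [Vec3], hs: &[Vec3]) {
    for (m, h) in ms.iter_mut().zip(hs) {
        *m = mat.next_m(*m, *h);
    }
}
```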
### Spirv Materials
all the materials usable in the spirv backend live in `src/sim/spirv/spirv_backend/src/mat.rs`.
to add a new one, implement the `Material` trait on some new type; both the type and its impl
must live in that file.

next, add an analog type somewhere in the main library, like `src/mat/mh_ferromagnet.rs`. this will
be the user-facing material.
now implement the `IntoFfi` and `IntoLib` traits for this new material inside `src/sim/spirv/bindings.rs`
so that the spirv backend can translate between its GPU-side material and your CPU-side/user-facing material.
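
the shape of that conversion pair might look like this (only the trait names `IntoFfi`/`IntoLib` come from the library; the trait definitions and types below are illustrative guesses):

```rust
/// user-facing, CPU-side representation (illustrative)
#[derive(Clone, Copy, PartialEq, Debug)]
struct LibConductor { conductivity: f64 }

/// GPU-side representation: plain repr(C) data the spirv code can read
#[repr(C)]
#[derive(Clone, Copy)]
struct FfiConductor { conductivity: f32 }

trait IntoFfi { type Ffi; fn into_ffi(self) -> Self::Ffi; }
trait IntoLib { type Lib; fn into_lib(self) -> Self::Lib; }

impl IntoFfi for LibConductor {
    type Ffi = FfiConductor;
    fn into_ffi(self) -> FfiConductor {
        FfiConductor { conductivity: self.conductivity as f32 }
    }
}

impl IntoLib for FfiConductor {
    type Lib = LibConductor;
    fn into_lib(self) -> LibConductor {
        LibConductor { conductivity: self.conductivity as f64 }
    }
}
```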

finally, because cpu-side `SpirvSim<M>` is parameterized over a material, but the underlying spirv library
is compiled separately, the spirv library needs specialized dispatch logic for each value of `M` you might want
to use. add this to `src/sim/spirv/spirv_backend/src/lib.rs` (it's about five lines: follow the example of `Iso3R1`).
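
the reason is that generics can't cross a separately-compiled boundary, so the spirv library has to export one concrete, monomorphized entry point per material. a sketch of the pattern (names made up; the real wrappers also wire up GPU buffers):

```rust
/// hypothetical material trait with a compile-time identifier
trait Material { const ID: u32; }

struct Iso3R1;
struct Conductor;
impl Material for Iso3R1 { const ID: u32 = 1; }
impl Material for Conductor { const ID: u32 = 2; }

/// the generic body, monomorphized once per material
fn step_generic<M: Material>() -> u32 { M::ID }

// one small wrapper per material; adding a new `M` means adding
// one more wrapper like these to the dispatch file:
pub fn step_iso3r1() -> u32 { step_generic::<Iso3R1>() }
pub fn step_conductor() -> u32 { step_generic::<Conductor>() }
```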
### CPU Materials
adding a CPU material is "simpler": just implement the `Material` trait in `src/mat/mod.rs`.
either link that material into the `GenericMaterial` type in the same file (if you want to easily
mix materials within the same simulation), or, if that material can handle every cell in your
simulation, instantiate a `SimState<M>` object which is directly parameterized over your material.
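
the enum-dispatch option looks roughly like this (a sketch with made-up materials; the real `GenericMaterial` covers the library's full material set):

```rust
/// hypothetical material trait
trait Material {
    fn conductivity(&self) -> f64;
}

struct Vacuum;
struct Conductor { conductivity: f64 }

impl Material for Vacuum {
    fn conductivity(&self) -> f64 { 0.0 }
}
impl Material for Conductor {
    fn conductivity(&self) -> f64 { self.conductivity }
}

/// each cell holds one variant, so one simulation can mix material kinds;
/// the enum forwards the trait to whichever variant the cell owns.
enum GenericMaterial {
    Vacuum(Vacuum),
    Conductor(Conductor),
}

impl Material for GenericMaterial {
    fn conductivity(&self) -> f64 {
        match self {
            GenericMaterial::Vacuum(m) => m.conductivity(),
            GenericMaterial::Conductor(m) => m.conductivity(),
        }
    }
}
```

the trade-off: the enum is convenient but every cell pays for the largest variant; `SimState<M>` with a single concrete `M` avoids that overhead.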
## What's in the Box
this library ships with the following materials:

- conductors (Isomorphic or Anisomorphic). supports CPU or GPU.
- linear magnets (defined by their relative permeability, mu\_r). supports CPU only.
- a handful of ferromagnet implementations:
  - `MHPgram` specifies the `M(H)` function as a parallelogram. supports CPU or GPU.
  - `MBPgram` specifies the `M(B)` function as a parallelogram. supports CPU or GPU.
  - `MHCurve` specifies the `M(H)` function as an arbitrary polygon. requires a new type for each curve for memory reasons (see `Ferroxcube3R1`). supports CPU only.
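
one plausible reading of the parallelogram idea, as a sketch (the parameter names and update rule below are my guesses, not `MHPgram`'s actual definition): `M` keeps its previous value until it would leave the parallelogram, whose walls are two lines of the given slope offset by the coercive field and capped at saturation.

```rust
/// toy M(H) parallelogram step: project the previous magnetization `m`
/// onto the region allowed by the current field `h`.
/// `slope`: wall slope; `h_c`: coercive field; `m_sat`: saturation.
fn step_m(m: f64, h: f64, slope: f64, h_c: f64, m_sat: f64) -> f64 {
    // the two walls of the parallelogram, capped at +-m_sat
    let lo = (slope * (h - h_c)).clamp(-m_sat, m_sat);
    let hi = (slope * (h + h_c)).clamp(-m_sat, m_sat);
    // hysteresis: M only moves when pushed against a wall
    m.clamp(lo, hi)
}
```

sweeping `h` up, back to zero, then down reproduces the classic loop: saturation, remanence, then reversed saturation.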

measurements include ([src/meas.rs](src/meas.rs)):
- E, B or H field (mean vector over some region)
- energy, power (net over some region)
- current (mean vector over some region)
- mean current magnitude along a closed loop (toroidal loops only)
- mean magnetic polarization magnitude along a closed loop (toroidal loops only)

output targets include ([src/render.rs](src/render.rs)):
- `ColorTermRenderer`: renders 2d slices in real time to the terminal.
- `Y4MRenderer`: outputs 2d slices to an uncompressed `y4m` video file.
- `SerializerRenderer`: dumps the full 3d simulation state to disk. parseable after the fact with [src/bin/viewer.rs](src/bin/viewer.rs).
- `CsvRenderer`: dumps the output of all measurements into a `csv` file.

historically there was also a plotly renderer, but that effort was redirected into improving the `src/bin/viewer.rs` tool instead.
# Performance
with my Radeon RX 5700XT, the sr\_latch example takes 125 minutes to complete 150ns of simulation time (3896500 simulation steps). that's on a grid of size 163x126x80 where the cell dimension is 20um.

in an FDTD simulation, as we shrink the cell size the time step has to shrink proportionally (the Courant stability condition ties the two together). so the scale-invariant performance metric is "grid cell steps per second" (`(163*126*80)*3896500 / (125*60)`): we get 850M.
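
that arithmetic can be spot-checked with a throwaway helper (not part of the library):

```rust
/// grid cell steps per second: total cell-updates divided by wall time.
fn cell_steps_per_sec(dims: (u64, u64, u64), steps: u64, wall_secs: u64) -> u64 {
    let cells = dims.0 * dims.1 * dims.2; // 163*126*80 = 1,643,040 cells
    cells * steps / wall_secs
}
```

plugging in the sr\_latch numbers (3,896,500 steps over 125 minutes) lands at roughly 854 million cell steps per second, matching the ~850M figure above.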