As I went through the examples, I noticed that every kernel's
grad() is called explicitly. Does this mean that autodiff in Taichi works only inside a kernel, with no tracing of gradients between different kernels?
You can use
ti.Tape to autodiff the whole program. E.g. https://github.com/yuanming-hu/difftaichi/blob/master/examples/diffmpm.py#L355
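For intuition, here is a minimal conceptual sketch in plain Python of what a tape does (this is not Taichi's actual implementation; the `Tape`, `Square`, and `Double` classes are hypothetical): each "kernel" knows its own local gradient, like Taichi's in-kernel autodiff, and the tape records the forward call order so it can chain those local gradients in reverse across kernel boundaries.

```python
# Conceptual sketch of tape-based autodiff across multiple "kernels".
# Each kernel provides its own forward and gradient (analogous to
# Taichi's in-kernel autodiff); the tape records call order and then
# replays the local gradients in reverse to chain them together.

class Tape:
    def __init__(self):
        self.calls = []  # (kernel, args) in forward order

    def record(self, kernel, *args):
        # Run the kernel forward and remember it for the backward pass.
        self.calls.append((kernel, args))
        return kernel.forward(*args)

    def backward(self, seed=1.0):
        # Replay recorded kernels in reverse, multiplying local gradients
        # (the scalar chain rule, for illustration only).
        grad = seed
        for kernel, args in reversed(self.calls):
            grad = kernel.grad(*args) * grad
        return grad

class Square:
    @staticmethod
    def forward(x): return x * x
    @staticmethod
    def grad(x): return 2.0 * x

class Double:
    @staticmethod
    def forward(x): return 2.0 * x
    @staticmethod
    def grad(x): return 2.0

tape = Tape()
x = 3.0
y = tape.record(Square, x)   # y = x^2 = 9
z = tape.record(Double, y)   # z = 2y = 18
dz_dx = tape.backward()      # chain rule: 2 * 2x = 12
print(dz_dx)                 # prints 12.0
```

In real Taichi code the equivalent is `with ti.Tape(loss): ...`, which records the kernel launches inside the `with` block and automatically invokes each kernel's gradient in reverse order.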
Thank you @yuanming,
I saw this in the docs:
Decouple computation from data structures
Domain-specific compiler optimizations
Two-scale automatic differentiation
Embedding in Python
Does "Two-scale automatic differentiation" here consist of in-kernel autodiff and ti.Tape-level tracing across kernels?
Yes - haven’t got a chance to document this…