Does autodiff only work inside one kernel?

Hello community,
As I went through the examples, I found that every kernel’s grad() is called explicitly. Does this mean that auto diff in Taichi is inside kernel only and there is no tracing of gradient between different kernels?
Thank you.

Hi @jackal

You can make use of ti.Tape to autodiff the whole program. E.g.

Thank you @yuanming ,
I saw this in docs.

Design decisions
Decouple computation from data structures
Domain-specific compiler optimizations
Two-scale automatic differentiation
Embedding in Python

Does “Two-scale automatic differentiation” here consist of in-kernel autodiff plus the tape mechanism?

Yes - haven’t got a chance to document this…