The definition is:

```python
@ti.kernel
def nn1(f: ti.i32, t: ti.i32):
    # hidden 1
```

Here `f` is a parameter passed in by the calling function, and `t` is the current index of the for loop.
The call stack goes like this:

```
File "D:\anaconda3\lib\site-packages\taichi\lang\kernel.py", line 287, in __call__
    return self.compiled_functions[key](*args)
File "D:\anaconda3\lib\site-packages\taichi\lang\kernel.py", line 243, in func__
    raise KernelArgError(i, needed, provided)
taichi.lang.kernel.KernelArgError: (0, DataType.int32, <class 'taichi.lang.expr.Expr'>)
```
When I substitute zero for the parameters, I get similar errors that differ only in the argument index (0 or 1).
I am not sure whether this is some kind of AST issue, but I suppose the kernel requires two integer arguments, and both of them are now expressions (`taichi.lang.expr.Expr`). Could you tell me how to convert the type (perhaps explicitly)?
I have been reading more example code. One possible reason is that my neural-network kernel is called from the Taichi scope, while in the rigid-body and other examples the networks are usually called from the Python scope. Is it possible to nest multiple kernels together with such a network?
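For reference, here is a minimal sketch of the scope distinction I mean (the field `x`, the helper `nn1_body`, and the `driver` kernel are hypothetical names, not from my actual code). A `@ti.kernel` can only be called from the Python scope with plain Python values, which is why passing loop-index expressions raises `KernelArgError`; inside another kernel, a `@ti.func` has to be used instead:

```python
import taichi as ti

ti.init(arch=ti.cpu)

n = 8
x = ti.field(dtype=ti.f32, shape=n)

# A @ti.func may be called from inside Taichi scope (i.e. from kernels),
# and its arguments may be Taichi expressions such as loop indices.
@ti.func
def nn1_body(f: ti.i32, t: ti.i32):
    x[t] += f  # placeholder for the hidden-layer computation

# A @ti.kernel may only be called from the Python scope,
# and its arguments must be plain Python values.
@ti.kernel
def driver(f: ti.i32):
    for t in range(n):      # here t is an Expr, which is fine for a ti.func
        nn1_body(f, t)

driver(3)  # f is a plain Python int, matching the kernel signature
```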
I have merged them into one kernel, `p2g` (to replace the determinant), and I find that the backward kernel `p2g_0__grad` cannot compile.
I suppose it is some kind of autodiff problem caused by inappropriate local variables.
The error goes like this:

```
[E 12/17/19 16:17:16.965] [make_adjoint.cpp:taichi::Tlang::MakeAdjoint::visit@274] Not Implemented.
[E 12/17/19 16:17:16.965] Received signal 22 (SIGABRT)
```

which hits

```cpp
void visit(LocalStoreStmt *stmt) override { TC_NOT_IMPLEMENTED }
```
What can I do to amend this?
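In case it helps to show the pattern I am trying, here is a hedged sketch of the workaround I have been experimenting with (the field names and the loss computation are made up for illustration): instead of repeatedly overwriting a local variable inside the differentiated kernel, accumulate directly into a global field, which the autodiff pass does support:

```python
import taichi as ti

ti.init(arch=ti.cpu)

n = 8
x = ti.field(dtype=ti.f32, shape=n, needs_grad=True)
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def p2g():
    for i in range(n):
        # Accumulate into a global field (an atomic add) rather than
        # mutating a local variable, which the adjoint pass rejects.
        loss[None] += x[i] * x[i]

# Record the forward pass and evaluate gradients; ti.ad.Tape is the
# current API (older versions spelled it ti.Tape).
with ti.ad.Tape(loss=loss):
    p2g()
```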
Sorry about that. Could you share the code with me via email (yuanmhu@gmail.com) so that I can take a look? There are some constraints on Taichi kernels for them to be differentiable.