What is the advantage of Taichi compared to PyTorch or TensorFlow when running on GPU?
@Aspen, thanks for the great question!
A high-level answer is that PyTorch mainly targets ML applications, while Taichi targets general high-performance numerical computation. One obvious difference is that Torch users operate at the tensor level, while Taichi users operate at the element level. Taichi therefore allows finer control over each element and over how the parallel computation is conducted. In Torch, a tensor operation might or might not be parallelized, depending on the implementation details.
Another difference is that Taichi kernels are JIT-compiled, while Torch's eager mode is executed by the Python interpreter, which can also have an impact on performance.
Finally, Torch and Taichi are not really competitors; they can be combined to make some tasks simpler. We're planning to release a few demos on how to integrate the two frameworks in the future, so please stay tuned!
Hi @ailing. I am very interested in seeing the demos of using Taichi with PyTorch. Could you please provide links to the demos once they have been released? Many thanks.
@Haydn @Aspen We’ve just released the first blog on this topic, Taichi & Torch 01: Resemblances and Differences | Taichi Docs, and we’re working on two more (including a demo integrating Taichi with Torch); they should be available in the next 2 weeks!
Let us know if this is helpful! Thanks!
Thank you for releasing the two demos on Torch and Taichi. I have found them very useful, especially the link to the section Using external arrays as Taichi kernel arguments, which I had not come across when reading the documentation. Also, as mentioned in the blog, I will evaluate the performance gained from using layout=ti.Layout.AOS in my code.