[HN Gopher] Hummingbird: Compile trained ML models into tensor c...
___________________________________________________________________
 
Hummingbird: Compile trained ML models into tensor computation
 
Author : polm23
Score  : 32 points
Date   : 2020-06-07 01:43 UTC (21 hours ago)
 
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
 
| vladf wrote:
| This is an interesting idea, with the main non-trivial win being
| vectorized GBDT inference: instead of serially walking down a
| decision tree, you can convert it to vectorizable GEMM code.
|
| From a cursory look at
| hummingbird/ml/operator_converters/_tree_commons.py, this seems to
| be doing a GraphBLAS-style graph traversal with dense GEMM, which
| strikes me as resulting in incredibly redundant computation. I
| think this would only give you acceleration for very small trees
| (edit: granted, I think that's what the defaults are for a lot of
| these packages).
|
| I'd be interested in seeing how this stacks up against:
|
| * native GPU execution for XGB/LGBM
|
| * a _sparse_ GEMM implementation of their algorithm
|
| * the classical reference for DT vectorization, QuickScorer; see
|   https://github.com/hpclab/quickscorer
 
| aasasd wrote:
| You'd think that a machine-learning thing would be called
| '_mocking_ bird'.
 
| 1337shadow wrote:
| Nothing to do with the Hummingbird notation, if anyone is
| wondering.
|
| https://www.hummingbirdnotation.com/
 
| fxtentacle wrote:
| How is this different from replacing numpy with cupy?
 
| KorfmannArno wrote:
| Is anyone else interested in reading groups for the open-source
| book D2L.AI?
___________________________________________________________________
(page generated 2020-06-07 23:00 UTC)
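
A minimal sketch of the tree-to-GEMM trick the top comment describes: every internal node's comparison is evaluated at once via a matrix multiply, and a second multiply-and-compare picks the reached leaf. The tiny hand-built tree, the matrix names (A, B, C, D, E), and the encoding below are illustrative assumptions loosely following Hummingbird's dense "GEMM strategy", not the library's actual internals.

```python
import numpy as np

# Hypothetical example tree (not from the repo):
#   node0: x[0] < 0.5 ? left -> leaf0 : right -> node1
#   node1: x[1] < 0.5 ? left -> leaf1 : right -> leaf2
#   leaf0 -> class 0, leaf1 -> class 1, leaf2 -> class 0

A = np.array([[1, 0],       # feature 0 is tested by internal node 0
              [0, 1]])      # feature 1 is tested by internal node 1
B = np.array([0.5, 0.5])    # per-node thresholds

# C encodes each leaf's root-to-leaf path: +1 if the leaf lies in a
# node's left subtree, -1 for the right subtree, 0 if the node is off
# the path. D counts the "go left" decisions on each leaf's path, so a
# leaf is reached exactly when its path sum equals its D entry.
C = np.array([[1, -1, -1],
              [0,  1, -1]])
D = np.array([1, 1, 0])

E = np.array([[1, 0],       # leaf 0 -> class 0
              [0, 1],       # leaf 1 -> class 1
              [1, 0]])      # leaf 2 -> class 0

def predict(X):
    T = (X @ A < B).astype(np.int64)          # all node comparisons at once
    leaves = ((T @ C) == D).astype(np.int64)  # one-hot over reached leaves
    return np.argmax(leaves @ E, axis=1)      # map leaves to class scores

X = np.array([[0.2, 0.9],   # goes left at node0            -> class 0
              [0.8, 0.2],   # right at node0, left at node1  -> class 1
              [0.8, 0.9]])  # right at node0, right at node1 -> class 0
print(predict(X))  # [0 1 0]
```

Note the redundancy the comment points out: every input multiplies against every node and every leaf, even though only one root-to-leaf path is ever taken, which is why the dense formulation mainly pays off for small trees on wide hardware.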