Neural networks seem to be used mostly in Computer Vision and Natural Language Processing scenarios, while tree-based models like GBDT are mainly used for tabular data.
Although this article tries to explain why, I don't find the explanation convincing. In my humble opinion, neural networks could eventually surpass, or at least be competitive with, GBDT models on tabular data.
For example, the paper "TabNet: Attentive Interpretable Tabular Learning" describes a Transformer-like model that mimics the behavior of tree models. The PyTorch implementation is here. I have used it on our own data, where it reached 90% accuracy (LightGBM reaches 93%). Despite the lower accuracy, this is the first neural model to reach 90% accuracy on our private data. The author has done a great job.