Hacker News
- LSTM for Human Activity Recognition https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition/ 7 comments
Linking pages
- GitHub - aymericdamien/TopDeepLearning: A list of popular github projects related to deep learning https://github.com/aymericdamien/TopDeepLearning 35 comments
- Analyzing my Google Location History | by Kshitij Mathur | Towards Data Science https://towardsdatascience.com/analyzing-my-google-location-history-d3a5c56c7b70?source= 26 comments
- GitHub - astorfi/TensorFlow-World: Simple and ready-to-use tutorials for TensorFlow https://github.com/astorfi/TensorFlow-World#5 16 comments
- GitHub - instillai/TensorFlow-Course: Simple and ready-to-use tutorials for TensorFlow https://github.com/open-source-for-science/TensorFlow-Course 6 comments
- GitHub - easy-tensorflow/easy-tensorflow: Simple and comprehensive tutorials in TensorFlow https://github.com/easy-tensorflow/easy-tensorflow 6 comments
- GitHub - jtoy/awesome-tensorflow: TensorFlow - A curated list of dedicated resources http://tensorflow.org https://github.com/jtoy/awesome-tensorflow 5 comments
- GitHub - guillaume-chevalier/Awesome-Deep-Learning-Resources: Rough list of my favorite deep learning resources, useful for revisiting topics or for reference. I have gone through all of the content listed there carefully. - Guillaume Chevalier https://github.com/guillaume-chevalier/awesome-deep-learning-resources 1 comment
- GitHub - guillaume-chevalier/Spiking-Neural-Network-SNN-with-PyTorch-where-Backpropagation-engenders-STDP: What about coding a Spiking Neural Network using an automatic differentiation framework? In SNNs, there is a time axis and the neural network sees data throughout time, and activation functions are instead spikes that are raised past a certain pre-activation threshold. Pre-activation values constantly fade if neurons aren't excited enough. (A toy integrate-and-fire sketch follows this list.) https://github.com/guillaume-chevalier/Spiking-Neural-Network-SNN-with-PyTorch-where-Backpropagation-engenders-STDP 1 comment
- TensorFlow-World/README.rst at master · astorfi/TensorFlow-World · GitHub https://github.com/astorfi/TensorFlow-World/blob/master/README.rst 0 comments
- GitHub - instillai/TensorFlow-Course: Simple and ready-to-use tutorials for TensorFlow https://github.com/instillai/TensorFlow-Course 0 comments
- Some Reasons Why Deep Learning has a Bright Future – Neuraxio https://www.neuraxio.com/en/blog/deep-learning/2019/12/29/why-deep-learning-has-a-bright-future.html 0 comments
- Spiking Neural Network (SNN) with PyTorch: towards bridging the gap between deep learning and the human brain https://guillaume-chevalier.com/spiking-neural-network-snn-with-pytorch-where-backpropagation-engenders-stdp-hebbian-learning/ 0 comments
- GitHub - ujjwalkarn/Machine-Learning-Tutorials: machine learning and deep learning tutorials, articles and other resources https://github.com/ujjwalkarn/Machine-Learning-Tutorials 0 comments
- GitHub - guillaume-chevalier/seq2seq-signal-prediction: Signal forecasting with a Sequence-to-Sequence (seq2seq) Recurrent Neural Network (RNN) model in TensorFlow - Guillaume Chevalier https://github.com/guillaume-chevalier/seq2seq-signal-prediction 0 comments
- GitHub - ChristosChristofidis/awesome-deep-learning: A curated list of awesome Deep Learning tutorials, projects and communities. https://github.com/ChristosChristofidis/awesome-deep-learning 0 comments
- LSTMs for Human Activity Recognition - Guillaume Chevalier's Blog https://guillaume-chevalier.com/lstms-for-human-activity-recognition/ 0 comments
- GitHub - guillaume-chevalier/GloVe-as-a-TensorFlow-Embedding-Layer: Taking a pretrained GloVe model, and using it as a TensorFlow embedding weight layer **inside the GPU**. Therefore, you only need to send the indices of the words through the GPU data transfer bus, reducing data transfer overhead. (A toy lookup sketch follows this list.) https://github.com/guillaume-chevalier/GloVe-as-a-TensorFlow-Embedding-Layer 0 comments
- GitHub - guillaume-chevalier/Linear-Attention-Recurrent-Neural-Network: A recurrent attention module consisting of an LSTM cell which can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can be easily used inside a loop on the cell state, just like any other RNN. (LARNN; a toy sketch appears at the end of this page.) https://github.com/guillaume-chevalier/Linear-Attention-Recurrent-Neural-Network 0 comments
- GitHub - instillai/TensorFlow-Course: Simple and ready-to-use tutorials for TensorFlow https://github.com/osforscience/TensorFlow-Course#4 0 comments
- GitHub - asetinUL/Awesome-Asetin https://github.com/asetinUL/Awesome-Asetin 0 comments
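Two of the repositories listed above describe their core trick in a sentence; the toy sketches below restate those tricks in code. They are minimal illustrations under stated assumptions, not excerpts from the repositories.

First, the GloVe-as-a-TensorFlow-Embedding-Layer idea: keep the pretrained embedding table resident on the GPU and look words up by integer index, so only small index tensors cross the host-to-device bus. The weights below are random stand-ins for a parsed GloVe file; the vocabulary size and dimension are made up.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for a parsed GloVe file: real weights would be loaded
# from e.g. a glove.*.txt download; vocab size and dimension here are made up.
vocab_size, embedding_dim = 5, 4
glove_weights = np.random.rand(vocab_size, embedding_dim).astype(np.float32)

# Keep the whole table on the device as a frozen variable, so only small
# integer word indices need to travel over the data transfer bus.
embedding_table = tf.Variable(glove_weights, trainable=False, name="glove_embeddings")

word_indices = tf.constant([[0, 2, 4]])                    # a batch of token ids
embedded = tf.nn.embedding_lookup(embedding_table, word_indices)
print(embedded.shape)                                      # (1, 3, 4)
```

Second, the Spiking-Neural-Network repository's description of spikes and decaying pre-activations maps onto a leaky integrate-and-fire step. The decay and threshold constants below are illustrative, not values from the repository.

```python
import torch

decay, threshold = 0.9, 1.0   # illustrative constants, not from the repository

def lif_step(potential, weighted_input):
    """One toy leaky integrate-and-fire step."""
    potential = decay * potential + weighted_input   # leak, then integrate the input
    spikes = (potential >= threshold).float()        # fire past the threshold
    potential = potential * (1.0 - spikes)           # reset the neurons that fired
    return potential, spikes

potential = torch.zeros(3)                           # three toy neurons
for t in range(5):                                   # the time axis the description mentions
    potential, spikes = lif_step(potential, torch.rand(3))
    print(f"t={t} spikes={spikes.tolist()}")
```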
Linked pages
- The Unreasonable Effectiveness of Recurrent Neural Networks https://karpathy.github.io/2015/05/21/rnn-effectiveness/ 434 comments
- [1609.08144] Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation http://arxiv.org/abs/1609.08144 97 comments
- Quora - A place to share knowledge and better understand the world http://www.quora.com 43 comments
- GitHub - astorfi/TensorFlow-World: Simple and ready-to-use tutorials for TensorFlow https://github.com/astorfi/TensorFlow-World#5 16 comments
- GitHub - jtoy/awesome-tensorflow: TensorFlow - A curated list of dedicated resources http://tensorflow.org https://github.com/jtoy/awesome-tensorflow 5 comments
- All Sites - Stack Exchange https://stackexchange.com/sites 2 comments
- GitHub - guillaume-chevalier/Awesome-Deep-Learning-Resources: Rough list of my favorite deep learning resources, useful for revisiting topics or for reference. I have gone through all of the content listed there carefully. - Guillaume Chevalier https://github.com/guillaume-chevalier/awesome-deep-learning-resources 1 comment
- GitHub - guillaume-chevalier/Linear-Attention-Recurrent-Neural-Network: A recurrent attention module consisting of an LSTM cell which can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can be easily used inside a loop on the cell state, just like any other RNN. (LARNN; a toy sketch follows this list.) https://github.com/guillaume-chevalier/Linear-Attention-Recurrent-Neural-Network 0 comments
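The Linear-Attention-Recurrent-Neural-Network description above sketches an LSTM cell that queries a window of its own past cell states with multi-head attention. Below is a rough Python restatement of that idea only; the sizes, the use of the cell state as the query, and the concatenation wiring are assumptions, not taken from the repository.

```python
import torch
import torch.nn as nn

# Toy restatement of the described idea: an LSTMCell whose input is augmented
# by attending over a window of its own past cell states. All constants and
# wiring choices here are illustrative assumptions.
input_size, hidden, heads, window = 8, 16, 4, 5
lstm_cell = nn.LSTMCell(input_size=input_size + hidden, hidden_size=hidden)
attention = nn.MultiheadAttention(embed_dim=hidden, num_heads=heads, batch_first=True)

h = torch.zeros(1, hidden)
c = torch.zeros(1, hidden)
past_cells = []                                          # windowed memory of past cell states

for t in range(10):
    x_t = torch.rand(1, input_size)                      # toy input at time t
    if past_cells:
        memory = torch.stack(past_cells[-window:], dim=1)    # (batch, <=window, hidden)
        query = c.unsqueeze(1)                               # query with the current cell state
        context, _ = attention(query, memory, memory)        # attend over past cell states
        context = context.squeeze(1)
    else:
        context = torch.zeros(1, hidden)
    h, c = lstm_cell(torch.cat([x_t, context], dim=-1), (h, c))
    past_cells.append(c.detach())
```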