Linking pages
- Advancements in machine learning for machine learning – Google Research Blog https://blog.research.google/2023/12/advancements-in-machine-learning-for.html 151 comments
- JAX Quickstart — JAX documentation https://jax.readthedocs.io/en/latest/notebooks/quickstart.html 143 comments
- #034 José Valim reveals Project Nx - Thinking Elixir https://thinkingelixir.com/podcast-episodes/034-jose-valim-reveals-project-nx/ 105 comments
- GitHub - google/jax: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more https://github.com/google/jax 99 comments
- Top minds in machine learning predict where AI is going in 2020 | VentureBeat https://venturebeat.com/2020/01/02/top-minds-in-machine-learning-predict-where-ai-is-going-in-2020/ 84 comments
- Failing to Reach DDR4 Bandwidth by Unum https://unum.cloud/post/2022-01-29-ddr4/ 71 comments
- Google Research: Themes from 2021 and Beyond – Google AI Blog https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html 52 comments
- tensorflow/RELEASE.md at master · tensorflow/tensorflow · GitHub https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md 37 comments
- Training Deep Networks with Data Parallelism in Jax https://www.mishalaskin.com/posts/data_parallel 37 comments
- Nx (Numerical Elixir) is now publicly available - Dashbit Blog https://dashbit.co/blog/nx-numerical-elixir-is-now-publicly-available 31 comments
- GPU Array Languages, Compiler & Libraries – The code_report Blog – A blog about programming languages and more https://codereport.github.io/GPUArrayLanguages/ 14 comments
- Hardware for Deep Learning. Part 4: ASIC | by Grigory Sapunov | Intento https://blog.inten.to/hardware-for-deep-learning-part-4-asic-96a542fe6a81 8 comments
- GitHub - mikeroyal/Neuromorphic-Computing-Guide: Learn about the neuromorphic engineering process of creating very large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures. https://github.com/mikeroyal/Neuromorphic-Computing-Guide 7 comments
- Why are ML Compilers so Hard? « Pete Warden's blog https://petewarden.com/2021/12/24/why-are-ml-compilers-so-hard/ 6 comments
- Guide | TensorFlow Core https://www.tensorflow.org/programmers_guide/meta_graph 5 comments
- Modular: The world's fastest unified matrix multiplication https://www.modular.com/blog/the-worlds-fastest-unified-matrix-multiplication 4 comments
- Run Your Own DALL·E Mini (Craiyon) Server on EC2 | by Meadowrun | Medium https://medium.com/@meadowrun/run-your-own-dall-e-mini-craiyon-server-on-ec2-e8aef6f974c1 2 comments
- GitHub - mikeroyal/Machine-Learning-Guide: Machine learning Guide. Learn all about Machine Learning Tools, Libraries, Frameworks, and Training Models. https://github.com/mikeroyal/Machine-Learning-Guide 2 comments
- Doing it the Hard Way: Making the AI engine and language 🔥 of the future — with Chris Lattner of Modular https://www.latent.space/p/modular 2 comments
- BFloat16: The secret to high performance on Cloud TPUs | Google Cloud Blog https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus 1 comment
Linked pages
- TensorFlow http://tensorflow.org/ 440 comments
- GitHub - google/jax: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more https://github.com/google/jax 99 comments
- Google Developers Blog: XLA - TensorFlow, compiled https://developers.googleblog.com/2017/03/xla-tensorflow-compiled.html 28 comments
- GitHub - elixir-nx/nx: Multi-dimensional arrays (tensors) and numerical definitions for Elixir https://github.com/elixir-nx/nx 10 comments
- PTX ISA :: CUDA Toolkit Documentation https://docs.nvidia.com/cuda/parallel-thread-execution/index.html 4 comments
- GitHub - pytorch/xla: Enabling PyTorch on Google TPU https://github.com/pytorch/xla 0 comments
Related searches:
Search whole site: site:www.tensorflow.org
Search title: XLA: Optimizing Compiler for Machine Learning | TensorFlow