TensorFlow is cross-platform. It runs on almost anything: CPUs and GPUs, including mobile and embedded platforms, and even tensor processing units (TPUs), specialized hardware for tensor mathematics. TPUs are not yet widely used, but an alpha program was recently launched.
Each device TensorFlow supports is backed by a high-performance kernel implemented in C++. On top of these kernels, TensorFlow provides simpler Python and C++ interfaces for the common layers of deep learning models, and above those it builds high-level APIs.
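This layering can be seen directly in code. The sketch below (assuming TensorFlow 2.x is installed) computes the same affine operation twice: once by calling an op that dispatches to a C++ kernel, and once through the higher-level Keras layer interface:

```python
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# Low level: call an op directly; it dispatches to a C++ (or GPU) kernel.
x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
y = tf.matmul(x, w)  # 1*3 + 2*4 = 11

# Higher level: the same kind of computation as a reusable layer object.
dense = tf.keras.layers.Dense(1, use_bias=False)
_ = dense(x)  # the layer creates its own weight variables on first call
```

Both paths end up in the same kernels; the higher levels only add convenience and structure.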
TensorFlow execution model
With TensorFlow, you can export the models you develop to multiple platforms.
In the current version of TensorFlow, you can create a computation graph: a data structure that fully defines the computation you want to perform. This has several advantages:
- The graph is portable: it can be run immediately or saved for later use, it runs on multiple platforms (CPUs, GPUs, TPUs), and it can be exported for mobile and embedded deployment without depending on the code that built it.
- The graph can be transformed and optimized: it can be rewritten into a version better suited to a particular platform, and memory and compute optimizations can be applied and traded off against each other. This is useful, for example, for fast mobile inference after training on larger machines.
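The export-and-reload workflow above can be sketched with TensorFlow 2.x APIs, where `tf.function` traces Python code into a graph and the SavedModel format stores that graph independently of the code that produced it (the `square` function and temporary directory here are illustrative, not from the original text):

```python
import tempfile
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# tf.function traces the Python body into a computation graph.
@tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
def square(x):
    return x * x

# Attach the traced function to a module and save it as a SavedModel.
module = tf.Module()
module.square = square
path = tempfile.mkdtemp()
tf.saved_model.save(module, path)

# Load the graph back. In practice this could happen on another platform
# or a serving process that has no access to the Python definition above.
restored = tf.saved_model.load(path)
result = restored.square(tf.constant([3.0]))
```

The restored object executes the saved graph, so only the graph, not the original Python, needs to travel to the deployment target.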
TensorFlow's high-level APIs, together with computation graphs, provide a rich and flexible development environment within a single framework.
Performance and benchmarking
The TensorFlow site has a section aimed specifically at performance-minded developers. Optimizations are often model-specific, but a few general guidelines can make a big difference.