TensorFlow is cross-platform. It runs on nearly everything: GPUs and CPUs, including mobile and embedded platforms, and even tensor processing units (TPUs), which are specialized hardware for tensor math. TPUs are not yet widely available, but an alpha program has recently been launched.
TensorFlow abstracts over these many supported devices with a high-performance core implemented in C++.
On top of this core sit Python and C++ frontends. Above them, an API provides a simpler interface for commonly used layers in deep learning models, and still higher-level APIs build on that in turn.
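To make the layering concrete, here is a minimal conceptual sketch, not TensorFlow's actual classes, of how a "layers" API can offer a simple interface on top of a low-level numeric core. All names (`matvec`, `Dense`) are illustrative assumptions.

```python
import math

def matvec(w, x):
    # Low-level "core" operation: matrix-vector product.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

class Dense:
    # A commonly used deep-learning layer: y = activation(W x + b),
    # built on top of the low-level core above.
    def __init__(self, weights, bias, activation=math.tanh):
        self.w, self.b, self.act = weights, bias, activation

    def __call__(self, x):
        z = matvec(self.w, x)
        return [self.act(zi + bi) for zi, bi in zip(z, self.b)]

# Stacking layers gives the simpler, higher-level interface the text
# describes: the user composes layers instead of writing matrix math.
layer1 = Dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0])
layer2 = Dense([[1.0, 1.0]], [0.0])
output = layer2(layer1([2.0, 1.0]))
```

The design point is that each level hides the one below it: the layer objects never expose the matrix arithmetic, just as TensorFlow's higher-level APIs hide the C++ core.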
TensorFlow execution model
TensorFlow lets you export machine learning models so they can run on multiple platforms.
In the current version of TensorFlow, you build a computation graph: a data structure that fully describes the computation you want to perform. Working with a graph has several advantages:
- The graph is portable. It can run on multiple platforms: CPUs, GPUs, and TPUs. It can also be exported for deployment on mobile and embedded platforms without depending on any of the code that built it.
- The graph can be transformed and optimized. It can be converted into a version better suited to a particular platform, and optimizations can trade memory for computation or vice versa. This is useful, for example, for fast mobile inference after training on larger machines.
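The build-then-run idea above can be sketched in a few lines. This is a conceptual illustration, not the TensorFlow API: a tiny deferred-execution graph whose nodes record operations and whose evaluation happens only when `run` is called. All names here are illustrative assumptions.

```python
class Node:
    # One vertex of the computation graph: an operation plus its inputs.
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, tuple(inputs), value

def const(v):
    return Node("const", value=v)

def add(a, b):
    return Node("add", (a, b))

def mul(a, b):
    return Node("mul", (a, b))

def run(node):
    # Nothing is computed until run() walks the graph -- the data
    # structure fully describes the computation before execution.
    if node.op == "const":
        return node.value
    args = [run(n) for n in node.inputs]
    return args[0] + args[1] if node.op == "add" else args[0] * args[1]

# Build the graph first...
graph = mul(add(const(2), const(3)), const(4))
# ...then execute it: (2 + 3) * 4
result = run(graph)  # → 20
```

Because the whole computation exists as data before it runs, a backend is free to inspect it, rewrite it for a particular device, or serialize it for deployment, which is exactly the portability and optimization benefit listed above.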
Together with computation graphs, TensorFlow's high-level APIs provide a rich and flexible development environment within a single framework.
Performance and benchmarking
The TensorFlow site has a section dedicated to performance, with information for performance-minded developers.