Google develops artificial intelligence chips not only for its own data centers but also for third-party products.
Last year, Google said its AI silicon was gaining strategic importance. In AI, researchers train models on large amounts of data so that machines can make predictions when new data arrives. The first version of the TPU could only make these predictions, while the second version, released in 2017, could also be used to train models, an update that let the chips compete with Nvidia's graphics cards.
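The split between training and prediction described above can be illustrated with a minimal sketch (pure Python, not Google's TPU software): a one-parameter linear model is first fitted to data, then applied to an input it has never seen.

```python
# Minimal sketch of the train/predict split: a one-parameter
# linear model y = w * x, trained by gradient descent.

def train(samples, epochs=200, lr=0.05):
    """'Training' phase: fit the weight w to (x, y) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y
            w -= lr * error * x  # gradient of squared error w.r.t. w
    return w

def predict(w, x):
    """'Prediction' (inference) phase: apply the learned model to new data."""
    return w * x

# Training data follows y = 2x, so the model should learn w close to 2.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(predict(w, 10.0))  # a new input the model has never seen; roughly 20
```

Training is the expensive loop over the data; prediction is a single cheap pass, which is why the two phases place such different demands on hardware.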
Third-Generation TPU Announced Earlier This Year
Now we have Edge TPUs, tiny chips designed specifically for the predictive part of AI, which is less computationally intensive than training models. Edge TPUs can do their calculations without having to connect to a cluster of powerful servers, so applications can work faster and more reliably. They can handle the AI work alongside a standard chip or microcontroller in a sensor or gateway device.
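One reason on-device prediction can be so light is that edge accelerators typically run models quantized to 8-bit integers rather than full floating point. As a rough sketch of that idea (pure Python, not the Edge TPU's actual toolchain; the weights and scale factor here are made up for illustration):

```python
# Sketch of 8-bit quantized inference: weights are stored as int8,
# the dot product runs in integer arithmetic, and only the final
# result is rescaled back to a float.

def quantize(weights, scale=127.0):
    """Map float weights in [-1, 1] to int8 for a small footprint."""
    return [max(-128, min(127, round(w * scale))) for w in weights]

def infer(int8_weights, inputs, scale=127.0):
    """Integer dot product, rescaled to a float at the end."""
    acc = sum(w * round(x * scale) for w, x in zip(int8_weights, inputs))
    return acc / (scale * scale)

weights = [0.5, -0.25, 0.75]       # hypothetical trained float weights
q = quantize(weights)
print(infer(q, [1.0, 1.0, 1.0]))   # close to 0.5 - 0.25 + 0.75 = 1.0
```

Integer multiply-accumulate units are far cheaper in silicon and power than floating-point ones, which is what makes this kind of arithmetic a good fit for a chip sitting next to a microcontroller in a sensor or gateway.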