Dharamshala, 7th August: Tensor is a custom-built System on a Chip (SoC) developed by Google to power Pixel phones. “We’re ecstatic to show off our new custom Google Tensor chip, which has taken four years to develop (for scale)! Tensor is our most significant Pixel innovation yet, and it is based on our two decades of computing knowledge. It will be on the Pixel 6 and Pixel 6 Pro in the fall,” CEO Sundar Pichai tweeted.
Google has joined the in-house SoC club, following Apple (M1), Huawei (Kirin), and Samsung (Exynos). Tensor has several advantages, including:
- More computing power: Google’s Pixel combines computational photography and machine learning to capture photographs (Night Sight, for example), and the company has built strong speech recognition models for its devices. These features require substantial processing power and low latency for optimal performance, and Pixel smartphones stand to benefit from the AI advances Tensor enables.
- Developing new AI capabilities: With Tensor, Google can add new machine learning features without worrying about performance, since demanding AI tasks require powerful processors to execute.
- Stronger hardware security: Tensor’s new security core will work in tandem with Titan M2 to add another layer of hardware security. Google’s Titan M is a custom-built chip that protects sensitive data such as passcodes, enables encryption, and secures app transactions.
Pixel users now get personalized features such as the notification shade, volume controls, and lock screen thanks to the recent release of Android 12’s first beta. The update also gives users more control over how much private information apps can access, with greater transparency into which apps are accessing what data.
The processor determines how well a phone performs and how long it lasts. Despite owning Android, Google has so far been unable to make a dent in the smartphone hardware market, and the Mountain View behemoth is hoping the all-new Tensor chip will revitalize its smartphone division. Google has also made a slew of recent advances in artificial intelligence and machine learning:
LaMDA from Google: Google’s Language Model for Dialogue Applications is built on Transformer, a neural network architecture developed by Google Research that also underpins many contemporary language models such as BERT and GPT-3. It is currently trained on text, but it could be applied to conversational AI, Google Maps, and other areas in the future.
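LaMDA itself has not been released, but the Transformer building block it shares with BERT and GPT-3 is easy to sketch. The toy model below, written in PyTorch, stacks standard self-attention encoder layers over token embeddings; the vocabulary size, model width, and layer count are arbitrary placeholders and bear no relation to LaMDA’s actual configuration.

```python
# Minimal sketch of the Transformer encoder building block referenced above.
# This is NOT LaMDA; vocab_size, d_model and num_layers are placeholder values.
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)  # per-token vocabulary logits

    def forward(self, token_ids):
        # token_ids: (batch, sequence_length) integer tensor
        hidden = self.encoder(self.embed(token_ids))
        return self.lm_head(hidden)  # (batch, seq_len, vocab_size)

model = TinyTransformerLM()
logits = model(torch.randint(0, 32000, (1, 16)))  # dummy batch of 16 tokens
print(logits.shape)  # torch.Size([1, 16, 32000])
```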
AI in Google Maps: Eco-friendly routes, which suggest fuel-efficient routes to users, and safer routing, which takes real-time weather and traffic conditions into account, are two new AI features in Google Maps.
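Google has not disclosed how eco-friendly routes are computed, but conceptually the feature amounts to a shortest-path search over a road graph whose edge weights estimate fuel consumption rather than travel time. The sketch below illustrates that idea with a plain Dijkstra search; the road graph and per-edge fuel figures are invented for the example.

```python
# Conceptual illustration only: fuel-aware routing as a weighted shortest path.
# The road graph and per-edge fuel estimates below are invented for the example.
import heapq

def cheapest_route(graph, start, goal):
    """Return (total_fuel, path) minimising estimated fuel use via Dijkstra."""
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        fuel, node, path = heapq.heappop(queue)
        if node == goal:
            return fuel, path
        for neighbour, edge_fuel in graph.get(node, []):
            cost = fuel + edge_fuel
            if cost < best.get(neighbour, float("inf")):
                best[neighbour] = cost
                heapq.heappush(queue, (cost, neighbour, path + [neighbour]))
    return float("inf"), []

# Edge weights are litres of fuel, not minutes, so a longer but flatter
# segment can beat a shorter, thirstier one.
road_graph = {
    "A": [("B", 1.2), ("C", 0.8)],
    "B": [("D", 0.5)],
    "C": [("D", 1.5)],
}
print(cheapest_route(road_graph, "A", "D"))  # (1.7, ['A', 'B', 'D'])
```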
Vertex AI: A managed machine learning platform for building, deploying, and maintaining AI models. Users can leverage pre-trained and custom tooling within a unified AI platform to develop, deploy, and scale machine learning models quickly. It also works well with open-source frameworks such as TensorFlow, scikit-learn, and PyTorch.
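As a rough sketch of what the workflow can look like with the Vertex AI Python SDK (google-cloud-aiplatform): upload a trained model artifact, deploy it to a managed endpoint, and request a prediction. The project ID, bucket paths, and serving container below are placeholders, and exact argument names may vary between SDK versions.

```python
# Hedged sketch of the Vertex AI Python SDK workflow; project, bucket and
# container URI are placeholders, and arguments may vary by SDK version.
from google.cloud import aiplatform  # pip install google-cloud-aiplatform

aiplatform.init(
    project="my-gcp-project",          # placeholder project ID
    location="us-central1",
    staging_bucket="gs://my-bucket",   # placeholder Cloud Storage bucket
)

# Register a model artifact that was trained elsewhere (e.g. with scikit-learn).
model = aiplatform.Model.upload(
    display_name="demo-sklearn-model",
    artifact_uri="gs://my-bucket/model/",  # placeholder path to the saved model
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy the model behind a managed endpoint and request a prediction.
endpoint = model.deploy(machine_type="n1-standard-2")
print(endpoint.predict(instances=[[0.1, 0.2, 0.3, 0.4]]))
```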
Little Patterns: Google Photos now has a feature that uses machine learning to translate photos into a series of numbers, which it then compares for visual and conceptual similarity.
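Google has not said which model powers Little Patterns, but “translating photos into numbers” is a plain-language description of image embeddings, which can then be compared for similarity. The sketch below conveys the general idea using an off-the-shelf torchvision ResNet-18 as a stand-in feature extractor; the photo file names are placeholders.

```python
# General idea only: embed photos as vectors, then compare them by cosine
# similarity. Google has not published the model Little Patterns uses; a
# stock torchvision ResNet-18 stands in as the feature extractor here.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep 512-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Turn a photo into a 512-dimensional feature vector."""
    with torch.no_grad():
        return backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))

# Placeholder file names; a higher cosine similarity means the photos are
# more alike visually or conceptually.
a, b = embed("photo_a.jpg"), embed("photo_b.jpg")
print(F.cosine_similarity(a, b).item())
```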
MUM: Multitask Unified Model is a new AI model built on a Transformer architecture and trained across 75 different languages. MUM can understand information in the form of text and images, with the capacity to expand to audio and video in the future.