Google Chat enhances automation with IFTTT integration; Pixel 9 introduces Adaptive Touch for improved wet-screen sensitivity

Key Points:
- Google Chat now supports IFTTT for seamless automation across popular apps.
- Pixel 9’s Adaptive Touch feature improves screen sensitivity in challenging conditions.
- Both updates are designed to enhance user experience across personal and enterprise settings.
Google Chat has expanded its functionality by integrating with IFTTT (If This, Then That), a powerful automation tool that connects with over 1,000 popular apps. This integration is available for both Google Workspace and personal Google accounts, making it easier for users to automate various tasks within Google Chat.
As revealed at Cloud Next 2024, the IFTTT integration offers a range of triggers, such as detecting new messages or members in a space, and corresponding actions like posting to a space or updating space descriptions. This is particularly useful for teams and enterprise accounts, allowing them to set up workflows like sending onboarding emails to new space members or notifying a space when new files are added to cloud storage services like Google Drive or Dropbox.
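To give a sense of what the “post to a space” action does under the hood, here is a minimal Python sketch that sends a message to a Google Chat space through an incoming webhook. The webhook URL and message text are placeholders, and IFTTT performs an equivalent call for you without any code; this is purely illustrative, not part of the IFTTT integration itself.

```python
import requests

# Placeholder: an incoming-webhook URL generated for a Google Chat space.
# An IFTTT applet makes a comparable call automatically when its trigger fires.
WEBHOOK_URL = "https://chat.googleapis.com/v1/spaces/EXAMPLE_SPACE/messages?key=...&token=..."

def post_to_space(text: str) -> None:
    """Post a plain-text message to the Google Chat space behind the webhook."""
    response = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    response.raise_for_status()

if __name__ == "__main__":
    # Example: the kind of message an onboarding automation might send
    # when a new member joins the space.
    post_to_space("Welcome to the team space! Check the pinned onboarding doc.")
```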
Additionally, the integration allows users to stay informed by automatically posting mentions or tags from platforms like Discord, X (formerly Twitter), and Facebook to a designated space in Google Chat. This feature is now accessible to all Google Workspace customers, Workspace Individual Subscribers, and personal Google account users.
Other Google Workspace services, including Gmail, Drive, Docs, Sheets, Calendar, Contacts, and Tasks, are also compatible with IFTTT, further expanding the automation possibilities within the Google ecosystem.
Pixel 9’s Adaptive Touch Enhances Screen Sensitivity
Google’s latest smartphone, the Pixel 9 series, introduces a new feature called Adaptive Touch, designed to improve touchscreen sensitivity in challenging conditions. Whether the screen is wet, covered by a screen protector, or subjected to other factors that might impede functionality, Adaptive Touch automatically adjusts the screen’s sensitivity to optimize performance.
This feature can be found in the Pixel 9’s settings under Display > Touch Sensitivity, alongside the existing “Screen protector mode.” Adaptive Touch was initially tested in Android beta releases earlier this year, but the Pixel 9 series marks its official launch.
According to tests conducted by Android Authority, the Pixel 9’s screen performs significantly better than the Pixel 8 Pro’s when Adaptive Touch is enabled, particularly when used with wet fingers. This improvement suggests that Google has made considerable strides in refining the touch experience on its latest devices. Adaptive Touch is enabled by default on all Pixel 9 models, so users benefit from the feature right out of the box.
Google brings new AI tools for learning languages and adds more features to Wallet and NotebookLM

Google is rolling out several new AI-powered features to help users learn languages, manage documents, and store IDs more easily.
First, Google is introducing new language learning tools through its Search app. These tools use AI to give users personalized lessons to practice speaking and listening in Spanish. English speakers in the U.S. can try these lessons, which include feedback and daily reminders. Google plans to expand support for more languages and regions soon.
Next, NotebookLM — Google’s AI note-taking tool — is getting smarter. It now offers “audio overviews,” where users can listen to summaries of their notes. This feature works in English, but Google says more languages like Spanish, Hindi, and Japanese will be added later this year. NotebookLM can also now answer questions based on user documents in more languages.
Lastly, Google Wallet is becoming more useful in the UK. People can now add digital versions of their UK passports for identity verification. This feature, in partnership with the UK government, helps users quickly confirm their identity online for government services.
With these updates, Google continues to blend AI into everyday tools, making learning, organizing, and identification easier and more accessible for users worldwide.
Google apps and Android Auto get fresh looks and smart updates

Google is bringing a cleaner and more modern design to many of its apps with the latest Material You changes. Apps like Google Calendar, Contacts, and others now have rounded corners, better spacing, and improved colors. These small updates make the apps look more polished and easier to use, especially on tablets and foldable phones. Google is quietly adding these updates through server-side changes, so users don’t need to download anything extra.
At the same time, Android Auto is also getting a helpful new feature. Google is testing built-in climate controls that let you adjust your car’s temperature and fan settings directly from the Android Auto screen. A demo shown by Google includes a new “Climate” button on the screen, which opens controls like temperature, fan speed, and even seat heaters.
This update is meant to reduce distractions while driving, as drivers won’t need to switch between different screens or reach for physical buttons. Google says the feature will work on cars that already support digital climate control systems.
Together, these changes show how Google is making its software not just prettier, but smarter and more user-friendly. Whether you’re checking your calendar or driving to work, these improvements are designed to make daily tasks easier and safer.
Gemini app gets new look and better controls for switching AI models

Google is giving its Gemini app a fresh new design on Android to make things easier for users. The biggest change is how you switch between different Gemini AI models like Gemini 1.5 Pro and Gemini 1.0 Pro.
In the old version of the app, you had to dig into the settings menu to switch models, which wasn’t very convenient. But now, Google has made it simpler by adding a new button right below the chat box. This lets you quickly choose the AI model you want to use. You’ll also see a short note explaining what each model is good at, helping you decide which one to use.
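For readers who want the same choice of model outside the app, here is a minimal sketch using the public google-generativeai Python SDK. This is only an illustration of switching between the two models mentioned above, not how the Gemini app itself is built, and the API key is a placeholder.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Pick a model much like the new in-app selector does:
# "gemini-1.5-pro" for stronger reasoning, "gemini-1.0-pro" for lighter, faster replies.
model_name = "gemini-1.5-pro"
model = genai.GenerativeModel(model_name)

response = model.generate_content("Explain in two sentences what Adaptive Touch does on the Pixel 9.")
print(response.text)
```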
The new layout also moves the microphone and image upload buttons. They now sit to the left of the text input bar, which makes the bottom of the screen look cleaner and more organized.
These updates seem to be rolling out slowly, so not everyone will see them right away. However, it looks like Google is testing these changes before launching them more widely.
With this update, Google is trying to make the Gemini app more user-friendly and give people more control over how they interact with its AI tools. It’s a small but helpful step in improving the overall experience.