Every time I try a new AI feature on a smartphone, I ask one constant question: Am I going to use this every day? I wonder if the software engineers, developers, and the world’s biggest smartphone brands and software companies asked themselves the same question when designing and marketing these features. Judging by the AI-centric features currently available on smartphones, I don’t think they did – and I don’t believe my observation is wrong.
Since the beginning of the year, I’ve used just about every smartphone from every major brand marketed for next-gen artificial intelligence capabilities, and tested beta versions of AI features before they were made available to consumers. By the end of the year, I had honestly lost track of how many AI features a smartphone has. It’s hard to keep up even for someone like me who covers this space closely – let alone for consumers trying to keep track of updates on their smartphones.
And that’s where the trouble really begins. There’s been a lot of emphasis on AI and its branding, but in reality, the first wave of AI-powered features barely makes the smartphone experience different from the phones I used last year. It makes me wonder if the AI-centric features that are so heavily marketed – almost shoved in our faces – are an afterthought rather than seamless integrations that improve the user experience, which has been stagnant over the past few years. And smartphones, the most personal devices we use, are crying out for improvement.
Although tech companies have made it clear that this rollout of AI features is only the first generation and just the beginning – and I appreciate their honesty – they can’t sell smartphones on the promise of something they openly admit is unfinished. I understand the pressure on tech companies (and engineers) to bring generative artificial intelligence – a type of AI that can ingest and analyze information at scale and produce new text and video content from simple prompts – to smartphones as quickly as possible. However, the problem is that apart from two or three features, none of the AI functionality on smartphones really stands out.
My view of tech companies and smartphone brands might be softer if they weren’t marketing AI as groundbreaking and were instead being brutally honest about what consumers should expect. No matter how hard tech companies try to position AI as “helpful” and “life-changing,” it feels more like a sidekick than something that truly enhances the everyday smartphone experience.
Take the case of writing tools, like those in Apple Intelligence or Gemini in Google Messages, which help you rewrite, proofread, or summarize text. Both are marketed as tools to fix typos, improve grammar, and make your writing sound more “professional.” But honestly, when I text someone, I rarely worry about making a typo or needing to polish a sentence. We all make typos and errors – that’s how free-flowing conversation should be.
Frankly, it would be strange if someone felt the need to correct grammar or typos before every conversation. It’s similar to how Instagrammers and creators apply touch-ups and filters before posting photos on social media. You don’t need to be a grammar nerd or use polished language to communicate. Perhaps such tools can be useful in a work-related context – like drafting an email to a client – but they’re hardly an everyday necessity. Now imagine using these AI writing tools to impress HR and your hiring manager with your writing skills, landing the job, only for everyone to realize you aren’t nearly the communicator your application suggested. That would be a disastrous situation for the company.
My biggest gripe with writing tools is that, although Apple likes to emphasize how deeply these features are embedded in the operating system, in reality it’s easy to forget they exist because they only appear when words are highlighted. I wish the feature were better integrated – perhaps directly into the keyboard.
The other AI features aren’t impressive either – they’re a bit more gimmicky and even less useful. Google and Apple both offer image generators: Google has Pixel Studio preloaded on the Pixel 9 series, and Apple has Image Playground as part of iOS 18.2. I used both extensively, and I find them fun and funny, which makes me question their purpose and how useful they really are. Of course, they’re harmless fun, but they don’t leave a lasting impression, and the initial excitement wears off within a week.
For me, though, the two features that have stayed with me the longest are Google’s Circle to Search and Apple’s Visual Intelligence. Both seem designed for a mobile-first experience, and both excel at what they’re actually designed to do. Circle to Search started rolling out on the Samsung Galaxy S24 and quickly found its way onto almost every new Android smartphone. The idea of finding information by simply circling an item with your finger to get a match from the web is pure genius. It’s sort of an extension of Google Search, but with a heavy focus on visuals, letting you get context on something you see. Every time I use the tool, it works great – you just circle, highlight, or tap on your screen and search on Google. It’s a simple idea that takes advantage of both software and hardware.
I like Visual Intelligence for the same reason. It requires using the iPhone 16’s new dedicated camera button to scan the world around you, making it perfect for quickly pulling up information on the go. I used the feature in every city I visited recently – Shenzhen, New York, and Colombo – and it paves the way for how we might use the iPhone in the future. It’s a well-thought-out, well-executed feature that, again, is designed for the iPhone.
Not that I don’t see potential in AI. All I’m saying is that current-generation AI features – or at least mobile-centric ones – are few and far between. It seems that tech companies are throwing AI into smartphones without knowing what will work as an everyday experience or which features will mesh well with the user interface and the mobile experience. The reason Circle to Search and Visual Intelligence work is that they’re optimized for mobile, but I can’t say the same for every AI-infused feature available on smartphones. Some may work individually, but overall, they lack purpose on smartphones.
I think what Meta did with the Ray-Ban smart glasses should be a benchmark for what AI can do when it fits the form factor well. I was amazed at how Meta got things right by offering AI-first features like Live AI and translation without slouching on execution.
At a time when smartphone companies are running out of ideas and phones are getting boring, AI should be a beacon of new hope, but for now it feels scattered everywhere without much purpose. Some AI-based features are great, like Magic Editor and real-time transcription, while others have great potential, like Apple Intelligence’s message and notification summaries. But overall, modern smartphones and their user interfaces still feel the same.
Just cluttering the interface with AI may not work; instead, AI needs to be woven into the UI to elevate the experience. Perhaps AI taking action on your behalf – something these tech companies have long promised with virtual agents – could make smartphones meaningfully different from where we stand now. I don’t think tech companies will slow down or pause; I think the standards and expectations will only get higher in 2025. I just hope we don’t end up where we did with the metaverse.