Apple’s second-largest acquisition isn’t about the iPhone, but entering a market it hasn’t set foot in yet | Technology News

Apple generally shies away from multi-billion-dollar acquisitions, unlike its peers in Silicon Valley, who are known for scooping up smaller rivals or, at times, even legacy companies at astronomical prices.
However, last week Apple bought Israeli startup Q.ai. While the purchase price was not disclosed, some reports suggest it could be around $2 billion. If that figure is accurate, the deal would be Apple’s second-biggest acquisition to date, behind only headphone maker Beats, which the Cupertino-based company bought in 2014 for $3 billion.
One wonders what made Apple agree to buy Q.ai, a relatively unknown company. Before jumping to the conclusion that Apple is far behind in Artificial Intelligence and is trying to catch up with rivals, it’s worth noting that Q.ai isn’t even a traditional “AI” company. So what exactly does the startup do? Does it even have a product on the market, or is it working on something that hasn’t been released yet?
I tried to figure out what Q.ai might be working on and the role the startup could play at Apple in the development of upcoming products.
Q.ai listens to ‘silent speech’
The name Q.ai may give the impression that it is an artificial intelligence company, and understandably so, but it isn’t. In fact, the Israeli company works in human–computer interaction technology. Like Apple, Q.ai is secretive and does not disclose its product pipeline, leaving much to speculation about what it is working on.
Its website says Q.ai is developing a new audio technology, but further digging suggests its primary research and development may be focused on technologies that read facial movements and interpret silent communication.
Apple Vision Pro. (Image: Anuj Bhatia/The Indian Express)
From what I understand, Q.ai has developed technology based on machine-learning algorithms that analyse facial muscles and micro-expressions when people speak, allowing the system to interpret silent communication and convert it into specific inputs or control instructions. Essentially, Q.ai appears to be working on a new form of human–computer interaction powered by machine learning. While it does fall under the broader AI umbrella, it is distinct from generative AI.
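To make the idea concrete: at its simplest, such a system would extract features from facial-muscle sensor readings and map them to discrete commands. The sketch below is purely illustrative — Q.ai has published nothing about its actual system, and the command names, feature vectors, and nearest-centroid approach here are all assumptions chosen for simplicity.

```python
# Illustrative sketch only: Q.ai's real system is unpublished. This toy
# nearest-centroid classifier shows the general idea of mapping facial
# muscle-activation features to discrete "silent speech" commands.
import math

# Hypothetical training data: averaged activation levels (e.g. lips, jaw,
# cheek) recorded while a user silently mouths each command.
COMMAND_CENTROIDS = {
    "play":  [0.8, 0.2, 0.1],
    "pause": [0.1, 0.9, 0.3],
    "next":  [0.4, 0.3, 0.8],
}

def classify(features):
    """Return the command whose centroid is closest to the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(COMMAND_CENTROIDS, key=lambda c: dist(features, COMMAND_CENTROIDS[c]))

print(classify([0.75, 0.25, 0.15]))  # a reading close to the "play" centroid
```

A production system would of course replace the hand-picked centroids with a trained model operating on camera or optical-sensor input, but the control flow — features in, command out — is the same.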
As I mentioned, its website is currently bare, offering no technical details about the technology it is working on, its potential applications, or any future products under development.
However, one thing is clear: Q.ai is not an “AI company” in the traditional sense, like OpenAI or xAI. Whatever Q.ai is developing is likely to be embedded at the chip level and eventually launched in a future Apple product (more on what that could be later).
Since Johny Srouji, Senior Vice President (Hardware Technologies), Apple, confirmed the acquisition to Reuters, it’s reasonable to assume he played a key role in bringing Q.ai’s technology to Apple. After all, Srouji is the mastermind behind Apple’s silicon, and the processor is the foundation of any modern tech product.
Q.ai co-founder is an old friend of Apple
What makes the deal particularly interesting is the team behind Q.ai, which also helps explain why Apple reportedly paid around $2 billion for the startup. Q.ai is estimated to be valued between $1.6 billion and $2 billion and was previously backed by Kleiner Perkins, Google Ventures (GV), Spark Capital, and Exor. It turns out Q.ai is led by CEO and co-founder Aviad Maizels, who previously sold PrimeSense to Apple in 2013.
Maizels and Apple have a history dating back to PrimeSense, the Israeli company founded by Michael Shpigelmacher, Aviad Maizels, and Alex Shpunt. PrimeSense shot to fame as a leader in 3D sensors, powering the Microsoft Kinect for the Xbox 360 and providing the technology behind Apple’s Face ID. Remember, the iPhone X was the first iPhone to identify its owner with facial recognition rather than a fingerprint sensor.
The iPhone is still the king of smartphones, even with a broken Apple Intelligence. (Image: Anuj Bhatia/The Indian Express)
The 3D technology PrimeSense developed was based on static structured light, which the team advanced into a dynamic structured-light system called “Light Coding.” PrimeSense first demonstrated a prototype of its 3D sensing system at the Game Developers Conference (GDC) in San Jose, catching the attention of Microsoft, which was looking for breakthrough technology for its upcoming game consoles.
Microsoft then adopted PrimeSense’s innovative 3D technology, resulting in Kinect, the motion-sensing accessory for the Xbox 360, in 2010. Kinect recognised players’ movements and voices, and enabled users to interact with their existing Xbox 360s and some games without a handheld controller. It felt magical. Kinect was a huge success and significantly outperformed the sensing technology used by Nintendo in its Wii console.
However, the partnership between the two companies didn’t last long. In 2013, when Microsoft debuted the Xbox One, it abandoned PrimeSense’s dynamic structured-light solution and shifted the second-generation Kinect to its own 3D time-of-flight (ToF) technology. This opened the door for Apple, which acquired PrimeSense for around $350 million in 2013. At the time, the PrimeSense deal was among Apple’s largest purchases, alongside the fingerprint technology firm AuthenTec and the chipmaker PA Semi.
With PrimeSense now part of Apple, Maizels joined the company and continued to advance human–computer interaction technology. He also co-founded Bionaut Labs, a medical startup that developed a tiny robot to treat brain diseases. In 2022, Maizels, who had become the senior director of Apple’s hardware and technology department, announced his departure and founded Q.ai.
Apple’s different take on AI and a focus on its ‘strengths’
Apple keeps buying smaller startups; in fact, it quietly picks up several every year. However, the main difference between Apple and the other heavyweights of Silicon Valley is that Apple looks for specific technologies it lacks in-house expertise in and brings in the teams behind them. Those teams are then aligned with Apple’s ethos, and their work is integrated into Apple products.
Ray Ban-Meta Display glasses. (Image: Anuj Bhatia/The Indian Express)
While many are questioning Apple’s choice of Q.ai, which has neither recurring revenue nor a shipping product, over a well-established company with AI expertise, the answer may be that Apple doesn’t need one: it inked a deal with Google earlier this month to have Google’s Gemini models power some Apple Intelligence features. That partnership should help enhance Apple Intelligence and make Siri, its voice assistant, much better and more useful.
But remember, Apple is not an AI company like OpenAI or Google, and there’s no need for it to compete with them. Apple is traditionally a hardware company that also designs the software running on its products. And, well, Apple’s latest quarterly results and record-high earnings prove that the company doesn’t have to worry about the “AI narrative” that has been attached to its image. The iPhone is still the king of smartphones, even with a broken Apple Intelligence.
An eye on a new form of ‘interaction’ for smart glasses
However, for Apple, what matters most is improving its products while setting aside the AI buzz. That can only happen if Apple develops new technologies years in advance and introduces them in products that have not yet hit the market. It’s an old strategy Apple has used successfully, whether with the mechanical scroll wheel that controlled the iPod’s interface or with the iPhone’s multi-touch display.
With the Q.ai acquisition, Apple appears to be following the same strategy, this time potentially enhancing audio input in forthcoming smart glasses and advanced versions of AirPods. Using AI and optical sensors, Q.ai’s technology can reportedly detect micro-movements of the lips, jaw, and facial muscles, enabling speech recognition even when users whisper or mouth words without speaking. In this way, it addresses the limitations of traditional microphones on smart glasses: users can give instructions without making a sound, just by moving their mouths. That offers a more private way to interact with AI-powered smart glasses, something Meta’s Ray-Ban glasses can’t do.
Another way Apple could use Q.ai’s technology is to develop the under-display Face ID that has been rumoured for years. By pairing high-precision recognition of facial muscle movements with its existing structured-light system, Apple could potentially preserve Face ID’s security while reducing the number of components, allowing the sensors to be hidden beneath the screen.
Truth be told, Apple needs new products, not just successive iterations of those it already sells. Sure, the iPhone is hugely popular, but there will come a time when it becomes less relevant. With the technology landscape changing so rapidly, it’s hard to predict which new product or technology might emerge out of nowhere and become a default in people’s tech lives. Nobody would have thought that ChatGPT could become so big in just three years.
Looking at its history, Apple has always reinvented the interface. Perhaps it’s time for Apple to start exploring new types of interfaces, and Q.ai’s technology could provide the foundation for its next-generation products.



