Elon Musk has stated on X that if Apple goes forward with plans announced at WWDC24 to integrate Siri with OpenAI’s ChatGPT, he will ban Apple devices from his companies.
In posts on X, Musk wrote: “If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.” He followed this up with: “And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage.”
Musk even replied directly to Apple CEO Tim Cook to express his displeasure: “Don’t want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies.”
What is the actual integration between Siri and OpenAI?
However, Musk’s reaction appears to stem from a misunderstanding of how the Siri-OpenAI integration will actually work. Apple announced Apple Intelligence at its conference this week, including several changes to Siri.
One of those changes lets the assistant recognize when a larger, more complex answer might be called for and offer the user the option to pass their question through to OpenAI’s ChatGPT. The OS-level integration Musk is concerned about doesn’t appear to exist.
Other Apple apps will also be able to call on OpenAI’s features, but it will remain an option that users must choose to access. “Of course, you’re in control over when ChatGPT is used and will be asked before any of your information is shared. ChatGPT integration will be coming to iOS 18, iPadOS 18, and macOS Sequoia later this year,” said Apple SVP of Software Engineering Craig Federighi during the WWDC24 keynote.
OpenAI also confirmed this in a blog post, stating “Requests are not stored by OpenAI, and users’ IP addresses are obscured. Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.”
Elon Musk doesn’t seem too bothered with the reality of the matter, however. In a further post on X, he said “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy! Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”
In response to this post, a Community Note appeared to add context: “Apple has developed their foundational models which run on-device (locally) and have approximately 3 billion parameters. For tasks that require more computing, Apple either uses Private Cloud Compute (open to verify for privacy) or OpenAI with an additional confirmation.”