Is Apple Intelligence safe or not? Opinions vary widely among authorities and stakeholders.
Apple is working hard to balance innovation with privacy as it gets ready to introduce several new artificial intelligence (AI) features. The company is enhancing its AI offerings with updates and a partnership that brings OpenAI’s ChatGPT to Apple devices; a Google Gemini integration is also rumored, possibly arriving in Q4. At the same time, Apple is emphasizing new systems designed to keep users’ data private and secure.
“We’re excited to partner with Apple to bring ChatGPT to their users in a new way. Apple shares our commitment to safety and innovation, and this partnership aligns with OpenAI’s mission to make advanced AI accessible to everyone. Together with Apple, we’re making it easier for people to benefit from what AI can offer,” stated Sam Altman, the CEO of OpenAI.
Is Apple Intelligence safe: The Cupertino firm looks confident
Known for prioritizing user privacy and safety, Apple is striving to maintain its reputation while participating in the competitive AI field. It is setting guidelines that differ from the usual ChatGPT practices and developing a new cloud system to handle some AI requests. However, the plan to integrate ChatGPT directly into devices with an upcoming operating system update could potentially alter this balance for Apple.
Apple has announced that users will soon be able to access ChatGPT through their devices without needing to create a separate account. This popular OpenAI chatbot will be available for free via Apple’s voice assistant Siri, which is set to receive an update, or through writing tools on their devices.
To ensure user safety, Apple has also introduced new privacy measures that impose stricter standards for ChatGPT integration compared to OpenAI’s typical privacy policies. These updates are part of Apple’s effort to maintain its reputation for prioritizing user privacy and security.
In addition to these privacy updates, Apple unveiled Private Cloud Compute, a new cloud intelligence system designed for private AI processing. This system extends the “industry-leading security and privacy” from Apple devices to the cloud. Importantly, the personal user data sent to this system remains accessible only to the user, with no access granted to Apple or any other parties.
Apple has ensured that OpenAI cannot link users’ identities to their requests by obfuscating IP addresses, according to an Apple spokesperson.
“Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests. ChatGPT’s data-use policies apply for users who choose to connect their account.”
-Apple
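Apple has not published the mechanism in detail, but the general idea behind obscuring IP addresses can be sketched as a trusted relay: user requests pass through an intermediary that strips anything identifying the client before forwarding the prompt, so the provider only ever sees the relay’s own address. The sketch below is purely illustrative and not Apple’s actual design; the function name `relay_request`, the header list, and the example IP addresses are all assumptions made for the demonstration.

```python
# Illustrative sketch (NOT Apple's implementation) of IP obfuscation
# through a trusted relay: only the prompt body is forwarded, while
# the client's address and identifying headers are dropped.

def relay_request(client_request: dict, relay_ip: str = "203.0.113.10") -> dict:
    """Build the outbound request as the AI provider would see it.

    `client_request` is a hypothetical dict with 'source_ip', 'headers',
    and 'body' keys; only the body (the user's prompt) passes through.
    """
    # Headers that could identify the user are removed entirely.
    identifying = {"X-Forwarded-For", "Cookie", "Authorization", "User-Agent"}
    clean_headers = {
        k: v for k, v in client_request.get("headers", {}).items()
        if k not in identifying
    }
    return {
        "source_ip": relay_ip,           # provider sees the relay, not the user
        "headers": clean_headers,
        "body": client_request["body"],  # the prompt itself is forwarded intact
    }

# Example: the user's real IP and session cookie never reach the provider.
incoming = {
    "source_ip": "198.51.100.7",
    "headers": {"Cookie": "session=abc", "Content-Type": "application/json"},
    "body": "What's the weather like?",
}
outbound = relay_request(incoming)
print(outbound["source_ip"])  # 203.0.113.10
```

The design choice this illustrates is separation of concerns: the relay can verify the user, but the provider downstream receives only an anonymized request, which is why OpenAI would be unable to tie a prompt back to a specific person.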
Apple Intelligence is not safe according to Elon Musk
Elon Musk, the owner of Tesla and X and a known critic of OpenAI, has raised significant concerns over Apple’s new integration of ChatGPT. After Apple’s announcement last week, Musk threatened to ban Apple devices from his companies, stating: “That is an unacceptable security violation.”
Previously, in March, Musk filed a lawsuit against OpenAI, alleging that the company had deviated from its original mission to develop AI for the benefit of humanity. Although he later dropped the lawsuit, his concerns about OpenAI’s practices have persisted.
Apple Intelligence is safe according to Mira Murati
In response to privacy concerns, Apple has outlined new rules for the ChatGPT integration, differing from OpenAI’s typical policies. Notably, requests made to ChatGPT through Apple devices will not be stored by OpenAI.
During Fortune’s MPW dinner on Tuesday evening, Mira Murati, the Chief Technology Officer of OpenAI, responded to Elon Musk’s allegations. Addressing the audience, she said, “That’s his opinion. Obviously, I don’t think so. We care deeply about the privacy of our users and the safety of our products.”
Apple Intelligence is not safe according to the EU
The US company revealed that iPhone users in the EU will not have access to new features such as Apple Intelligence, enhancements to the SharePlay screen-sharing function, and iPhone Mirroring. These features, expected to be released later this year, will be withheld in the EU because of the new Digital Markets Act (DMA), which has introduced more stringent regulatory requirements.
One of the main issues is the DMA’s requirement for interoperability, which Apple claims conflicts with its commitment to privacy and security. Interoperability mandates that essential functions like texting, calling, voice messaging, and sharing images and videos must work seamlessly across different operators and competing devices.
Final verdict
Craig Federighi, Apple’s Senior Vice President of Software Engineering, has highlighted that Apple’s Private Cloud Compute extends the iPhone’s industry-leading security to the cloud, ensuring user data is used only to fulfill requests and is never stored or accessible to anyone, including Apple. This architecture aims to balance powerful AI capabilities with robust privacy protections, and independent experts can verify it for additional transparency.
Michelle Bachelet, UN High Commissioner for Human Rights, has emphasized the urgent need for a moratorium on AI systems that pose serious risks to human rights until adequate safeguards are in place. She warns that AI technologies, including those used by Apple, must be developed and implemented with a strong regard for privacy and human rights to prevent misuse and discrimination.
Brookings Institution has discussed the broader implications of AI on privacy, noting that AI magnifies the ability to use personal information in ways that can intrude on privacy interests. The institution calls for comprehensive privacy legislation to protect against the adverse effects of AI, emphasizing the need for greater transparency and control over personal data used in AI systems.
At the end of the day, we are all active participants in the technological world, and we can never be entirely sure what is truly safe. The question of whether Apple Intelligence is safe ultimately depends on individual perspectives and comfort levels. It is therefore wise to stay informed, understand the privacy measures in place, and remain vigilant about your personal data. The responsibility lies with you to decide what feels safe and appropriate for your own data security.
Featured image credit: Apple