According to 9To5Mac, Apple recently confirmed that the open-source Efficient Language Models (OpenELM) AI model it released in April is not used in any of its AI or machine learning features, including Apple Intelligence. This comes days after an investigation found that Apple and other tech giants had used thousands of YouTube captions to train their AI models.
As stated in the report, Apple said it developed the OpenELM model to contribute to the research community and to foster the development of open-source large language models. Apple researchers previously described OpenELM as a state-of-the-art open language model.
The company said OpenELM was built for research purposes only and is not intended to power Apple Intelligence features. The AI models have been released as open source and are widely available on Apple’s machine learning research website.
Last month, a research paper suggested that Apple doesn’t use user data to train Apple Intelligence: The company said its AI models are trained on licensed data, including data selected to power specific features, as well as public data collected by the company’s web crawler, AppleBot.
A recent investigation by Wired suggests that major companies like Apple, NVIDIA, Anthropic, and Salesforce have trained their AI models using captions from over 170,000 YouTube videos from popular content creators. The dataset is part of a larger collection from nonprofit EleutherAI called The Pile.
The tech giant also revealed that it has no plans to release a new version of the OpenELM model.
Meanwhile, Anthropic spokesperson Jennifer Martinez told Proof News, the publication that conducted the investigation, “Pile contains a small portion of YouTube’s subtitles. YouTube’s terms cover direct use of the platform, which is separate from use of the Pile dataset. Any concerns about potential violations of YouTube’s terms of service should be directed to the Pile authors.”
The Apple Intelligence features were announced at the company’s WWDC 2024 event and will be available in some form at the launch of the iPhone 16 series. These GenAI features will only be available on the iPhone 15 Pro, iPhone 15 Pro Max, and iPads and Macs with an M1 chip or later.
S Aditya
S Aditya is a special correspondent for News18 Tech.
Location: California, USA
First published: 22 July 2024 13:10 IST