Since the dawn of the iPhone, much of a smartphone's intelligence has come from somewhere else: the cloud. Mobile apps send user data to cloud servers to perform useful tasks, such as transcribing speech or suggesting message replies. Now Apple and Google say smartphones are smart enough to do some critical and sensitive machine learning tasks on their own.
At Apple’s WWDC event this month, the company said its virtual assistant Siri will transcribe voice in certain languages on recent and future iPhones and iPads without using the cloud. At its own I/O developer event last month, Google said the latest version of its Android operating system has a feature dedicated to processing sensitive data securely on the device, called the Private Compute Core. Its initial uses include powering the company's version of the smart reply feature built into its mobile keyboard, which can suggest responses to incoming messages.
Both Apple and Google say on-device machine learning offers more privacy and snappier apps. Not transmitting personal data reduces the risk of exposure and saves the time spent waiting for data to traverse the internet. At the same time, keeping data on devices serves the tech giants' long-term interest in binding consumers into their ecosystems: people who hear that their data can be processed more privately may be more willing to agree to share more of it.
The companies' recent promotion of on-device machine learning follows years of technical work on limiting the data the cloud can "see."
In 2014, Google began collecting some data on Chrome browser usage through a technique called differential privacy, which adds noise to collected data in ways that restrict what those samples reveal about any individual. Apple has used the technique on data collected from phones to inform emoji and typing predictions, and on web browsing data.
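The core idea of adding noise to protect individuals can be sketched with the classic randomized-response mechanism, one of the simplest forms of local differential privacy. This is an illustrative toy, not the encoding Google or Apple actually deploys (Chrome's system, for instance, uses a more elaborate scheme):

```python
import random

def randomized_response(truth: bool) -> bool:
    """Report the true answer half the time; otherwise answer a coin flip.

    Any single report is plausibly deniable, yet the aggregate still
    lets the collector estimate the true rate across the population.
    """
    if random.random() < 0.5:
        return truth              # tell the truth
    return random.random() < 0.5  # answer at random instead

def estimate_true_rate(reports: list[bool]) -> float:
    """Invert the noise: observed_rate = 0.5 * true_rate + 0.25."""
    observed = sum(reports) / len(reports)
    return (observed - 0.25) / 0.5
```

Because each device randomizes its own report before anything leaves the phone, the server never learns any individual's real answer, only a noisy population-level signal.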
More recently, both companies have adopted a technology called federated learning. It allows a cloud-based machine learning system to be updated without collecting raw data; instead, individual devices process data locally and share only digested updates. As with differential privacy, the companies have discussed using federated learning only in limited cases. Google has used the technique to keep its mobile typing predictions in sync with linguistic trends; Apple has published research on using it to update speech recognition models.
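The shape of a federated learning round can be sketched as follows. This is a minimal toy with a one-parameter linear model and hypothetical data, not either company's implementation: each device fits the model on its own data and shares only the resulting weight, and the server averages those digests without ever seeing the raw data.

```python
def local_update(xs, ys, w, lr=0.01, steps=100):
    """One device: gradient descent on y ≈ w * x using only local data."""
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w  # only this digested update leaves the device

def federated_round(device_data, w_global):
    """Server: average the weights returned by the devices."""
    updates = [local_update(xs, ys, w_global) for xs, ys in device_data]
    return sum(updates) / len(updates)
```

Repeating the round converges the shared model toward what centralized training would find, while each device's raw examples stay on the device.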
Rachel Cummings, an assistant professor at Columbia University who has previously consulted on privacy for Apple, said the speed with which some machine learning has moved onto phones is striking. "It is very rare to see something go from initial concept to large-scale deployment in just a few years," she said.
That progress required not only advances in computer science but also companies taking on the practical challenges of processing data on devices owned by consumers. Google has said that its federated learning system taps users' devices only when they are plugged in, idle, and on a free internet connection. The technique was also enabled in part by improvements in the power of mobile processors.
More powerful mobile hardware also contributed to Google's 2019 announcement that voice recognition for its virtual assistant on Pixel devices would run entirely on-device, untethered from the cloud. Apple's new on-device speech recognition for Siri, announced at WWDC this month, will use the "neural engine" the company added to its mobile processors to power machine learning algorithms.
The technical feats are impressive. Whether they will meaningfully change users' relationship with tech giants is debatable.
Presenters at Apple's WWDC said Siri's new design was a "significant privacy update" that addresses the risk of accidentally transmitting audio to the cloud, calling that users' biggest privacy concern about voice assistants. Some Siri commands, such as setting a timer, can be recognized wholly locally, making for a speedy response. Yet in many cases commands transcribed to Siri, presumably including from accidental recordings, will still be sent to Apple servers for software to decode and respond. Siri voice transcription will also remain cloud-based for HomePod smart speakers, which are commonly installed in bedrooms and kitchens, where accidental recording may be more worrying.
Google has also promoted on-device data processing as a privacy win and signaled it will expand the practice. The company hopes partners such as Samsung that build on its Android operating system will adopt the new Private Compute Core and use it for features that rely on sensitive data.
Google has also made local analysis of browsing data a feature of its proposal for reshaping online ad targeting, dubbed FLoC, which it claims is more private. Academics and some rival tech companies say the design is likely to help Google consolidate its dominance of online ads by making targeting more difficult for other companies.