
Texas AG to investigate Meta and Character.AI

Texas Attorney General Ken Paxton has announced his intention to investigate Meta AI Studio and Character.AI for offering AI chatbots that can claim to be mental health tools, and for potentially misusing data collected from underage users.

Paxton says that AI chatbots on either platform "can present themselves as professional therapeutic tools," to the point of lying about their qualifications. That behavior can leave younger users vulnerable to misleading and inaccurate information. Because AI platforms often rely on user prompts as another source of training data, either company could also be violating young users' privacy and misusing their data. That is especially relevant in Texas, where the SCOPE Act places specific limits on what companies can do with data collected from minors and requires platforms to offer tools so parents can manage the privacy settings of their children's accounts.

For now, the Attorney General has issued civil investigative demands (CIDs) to both Meta and Character.AI. As TechCrunch notes, neither Meta nor Character.AI says its AI chatbot platform should be used as a mental health tool. That doesn't stop there from being a multitude of "Therapist" and "Psychologist" chatbots on Character.AI, nor does it stop those chatbots from claiming to be licensed professionals, as 404 Media reported in April.

"The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear," a Character.AI spokesperson said. "For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction."

Meta shared a similar sentiment in its comment. "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI, not people," the company said. Meta's AIs are also supposed to "direct users to seek qualified medical or safety professionals when appropriate." Pointing people to real resources is good, but ultimately the disclaimers themselves are easy to ignore and don't act as much of a deterrent.

As for privacy and data use, both Meta's privacy policy and Character.AI's privacy policy acknowledge that data is collected from users' interactions with the AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information, and says that information can be used for advertising, among other applications. How either policy applies to children, and whether it complies with Texas's SCOPE Act, seems to hinge on how easy it is for a minor to create an account.
