A shocking security breach has been reported by a user of ChatGPT, the popular AI chatbot developed by OpenAI.
The user, who contacted Ars Technica, claims that he was able to view and download private conversations of other users, some of which contained sensitive information such as passwords, email addresses, and phone numbers.
ChatGPT is a web-based application that lets users converse with an AI assistant. OpenAI says the service uses state-of-the-art natural language processing techniques to create engaging, natural dialogue.
However, according to the user who reported the breach, ChatGPT also exposes the private conversations of other users without their consent or knowledge.
The user, who goes by the name Chase Whiteside, says that he discovered the breach by accident when he was browsing his chat history on ChatGPT.
He noticed that some of the conversations he had with the AI agent were not his own, but belonged to other users who had chatted with the same agent.
In this article, we will provide updates on the reported leak of private conversations and passwords associated with ChatGPT.
ChatGPT user finds other people’s private conversations and passwords in his history
The user, Chase Whiteside, claims that he found private conversations of other users in his chat history, which included login credentials and personal information.
Whiteside shared screenshots of the conversations, which seem to belong to employees of a prescription drug portal who were using ChatGPT for support.
The conversations reveal usernames and passwords that could potentially compromise the security and privacy of the portal and its users.
It is unclear how long these conversations have been exposed or how many users have been affected.
The platform appears to have failed to protect the privacy and security of its users, exposing sensitive information to unrelated parties.
This incident raises serious questions about the ethical and legal implications of using artificial intelligence for communication purposes.
How can users trust that their conversations are not being accessed or shared by others without their consent?
Investigation into the ChatGPT Privacy Breach
Whiteside says the additional conversations appeared in his chat history on Monday morning, shortly after he used ChatGPT for an unrelated request of his own.
He contacted OpenAI to report the issue and shared some screenshots with Ars Technica.
OpenAI told Ars Technica that it is investigating the report and working to fix the problem as soon as possible. The company did not disclose how many users were affected or how long the breach lasted.
The case shows that it is still advisable to remove sensitive data from online chats, especially when using services that rely on artificial intelligence.
Users should be aware of the risks and limitations of using such services and protect their personal information accordingly.
These are some of the issues that need to be addressed as artificial intelligence becomes more prevalent and powerful in our society.
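One practical precaution is to scrub credential-like strings locally before pasting text into any AI chat. Below is a minimal sketch in Python; the `redact` helper and its patterns are illustrative assumptions, not part of ChatGPT or any real API, and real deployments would need more thorough pattern coverage.

```python
import re

# Illustrative patterns for common sensitive strings. These are
# simplified examples, not an exhaustive or production-grade list.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "PASSWORD": re.compile(r"(?i)(?:password|passwd|pwd)\s*[:=]\s*\S+"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [REDACTED-<label>] marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact me at alice@example.com, password: hunter2"
    print(redact(sample))
```

Running text through a filter like this before it leaves the user's machine limits what a leaked chat history could expose, regardless of how the service itself handles the data.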
Beyond confirming its investigation, OpenAI has not issued a public statement regarding the privacy breach. We will update this blog post as more information becomes available.