ChatGPT, developed by OpenAI, now offers image interpretation capabilities, though the feature has clear limitations: it can identify famous landmarks, but it struggles to recognize lesser-known artists or locations. Even so, the visual feature can be entertaining for anyone exploring a new city or neighborhood. OpenAI has also placed restrictions on the chatbot to protect user privacy and security, which means it is not supposed to identify real people in images.

In one conversation, however, the image feature slipped past some of the safeguards OpenAI has put in place. The chatbot initially refused to identify a meme of Bill Hader, but later changed its answer. It also had trouble accurately describing an image from RuPaul’s Drag Race, making several erroneous assumptions about the contestants.

If ChatGPT’s guardrails were removed, significant privacy concerns would follow: it would become easy to link publicly posted photos to people’s identities, opening the door to stalking and harassment by those who abuse such tools. Protecting people’s privacy, especially that of women and minorities, is crucial to the safe and ethical use of image features in chatbots.

Sources:
– Article originally published in WIRED.