A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?
TechnologyReview, December 19, 2022
In the fall of 2020, gig workers in Venezuela shared a series of images on online forums that were captured by prototypes of iRobot’s Roomba J7 series vacuum robots. These images depicted everyday household scenes and were sent to Scale AI, a startup that contracts workers to label data used to train artificial intelligence. MIT Technology Review obtained 15 screenshots of these private photos, which had been posted to closed social media groups. The images varied in content and sensitivity, and depicted rooms from homes around the world, some with people or animals present.
iRobot, the largest vendor of robotic vacuums and recently acquired by Amazon for $1.7 billion, confirmed that these images were taken by its Roombas in 2020. These devices were given to “paid collectors and employees” who signed written agreements acknowledging that they were sending data streams, including video, back to the company for training purposes. The devices were labeled with a bright green sticker reading “video recording in progress,” and it was the responsibility of the paid data collectors to “remove anything they deem sensitive from any space the robot operates in, including children.” iRobot believes that anyone whose photos or video appeared in the streams had agreed to allow their Roombas to monitor them.
The incident raises concerns about the potential for privacy breaches and raises questions about the extent to which individuals can give informed consent when it comes to the collection and use of their personal data by companies. It’s worth noting that these sorts of scenes are regularly captured and sent back to the cloud by internet-connected devices, although they usually have stricter storage and access controls in place.
Read more about these images in the article at the link
AI Injection part 2: OpenAI’s APIs are broken by design
Algorithm.co.il, December 21, 2022
Imagine that you’re using a Chrome extension to analyze the code in a GitHub repository and get a summary of what it does. You use it on a Python library that’s supposed to help with training artificial intelligence models, but you end up installing a bitcoin miner on your powerful EC2 instance without realizing it. How did that happen?
This is an example of “AI injection,” where the action to be taken and the input to be processed are not separated, leading to security issues similar to format string attacks and SQL injections. One specific API that can be problematic in this regard is OpenAI’s completion endpoint, which is intended for tasks like summarizing text for second graders. While this API is impressive in its capabilities, it becomes dangerous when the instruction and the input are not clearly separated.
For instance, you might go to the Chrome Web Store and search for a tool to summarize a website, like the extension “Summarize.” It usually works well, extracting the text from the website and adding the instruction “Rewrite this for brevity, in outline form:” before the text. However, it’s easy to hack by adding some text with an HTML element that has a “display: none” style. This demonstrates how the instruction and the input are not separated, leading to problems.
It’s important to be careful when designing online apps, Chrome extensions, or online services that use AI, and to make sure that the code is secure and of high quality. This is especially true when processing content produced by a third party and showing the result to users, such as with website summaries or code explanations.
Read the full article at the link
Mom Banned from N.Y.C. Venue Kicked Out of Christmas Show When Spotted by Face Recognition Technology
People, December 21, 2022
A lawyer was denied entry to a Christmas show at Radio City Music Hall in New York City due to her job and the involvement of her law firm in ongoing litigation against the venue’s operator, Madison Square Garden Entertainment. Kelly Conlon, who works for a New Jersey-based firm, was told by security guards to leave shortly after entering the lobby, leading her to believe that facial recognition technology was being used at the venue. According to Conlon, the guards knew her name and the firm she was associated with before she even told them.
MSG Entertainment has a policy that prohibits attorneys from firms with ongoing litigation against the company from attending events at its venues until the litigation has been resolved. The company stated that Conlon was “notified in advance that she would be denied entry” and that facial recognition technology is used to ensure the safety of guests and employees. However, Conlon claimed that she thought a recent court ruling made it so that ticket holders “may not be denied entry to any shows.”
This incident highlights the potential consequences of facial recognition technology being used in public venues. While it may be effective at ensuring the safety of guests and employees, it can also lead to issues such as privacy concerns and discrimination. It is important for companies to consider the potential impacts of this technology and ensure that its use is transparent and fair. In this case, MSG Entertainment’s policy of denying entry to attorneys from firms with ongoing litigation against the company raises questions about whether it is being used as a means of punishment or retaliation.
Read more about this incident in the article at the link.
Subscribe for updates
Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.