The bottom line, says William Agnew, a postdoctoral fellow in AI ethics at Carnegie Mellon University and one of the coauthors, is that “anything you put online can [be] and probably has been scraped.”
The researchers found thousands of instances of validated identity documents—including images of credit cards, driver’s licenses, passports, and birth certificates—as well as over 800 validated job application documents (including résumés and cover letters), which were confirmed through LinkedIn and other web searches as being associated with real people. (In many more cases, the researchers didn’t have time to validate the documents or were unable to because of issues like image clarity.)
Many of the résumés disclosed sensitive information, including disability status, the results of background checks, birth dates and birthplaces of dependents, and race. When résumés were linked to people with online presences, researchers also found contact information, government identifiers, sociodemographic information, face photographs, home addresses, and the contact information of other people (like references).

When it was released in 2023, DataComp CommonPool, with its 12.8 billion data samples, was the largest existing data set of publicly available image-text pairs, which are often used to train generative text-to-image models. While its curators said that CommonPool was intended for academic research, its license does not prohibit commercial use either.
CommonPool was created as a follow-up to the LAION-5B data set, which was used to train models including Stable Diffusion and Midjourney. It draws on the same data source: web scraping done by the nonprofit Common Crawl between 2014 and 2022.
While commercial models often don’t disclose what data sets they are trained on, the shared data sources of DataComp CommonPool and LAION-5B mean that the data sets are similar, and that the same personally identifiable information likely appears in LAION-5B, as well as in other downstream models trained on CommonPool data. CommonPool researchers did not respond to emailed questions.
And since DataComp CommonPool has been downloaded more than 2 million times over the past two years, it’s likely that “there [are] many downstream models that are all trained on this exact data set,” says Rachel Hong, a PhD student in computer science at the University of Washington and the paper’s lead author. Those would replicate similar privacy risks.
Good intentions aren’t enough
“You can assume that any large-scale web-scraped data always contains content that shouldn’t be there,” says Abeba Birhane, a cognitive scientist and tech ethicist who leads Trinity College Dublin’s AI Accountability Lab—whether it’s personally identifiable information (PII), child sexual abuse imagery, or hate speech (which Birhane’s own research into LAION-5B has found).