Genesse Miles

Reading Blog 1

This article was really interesting because it is framed as a research paper on how effective this technology is, but it is also incredibly biased. It was written by the people who made Glaze, a tool that lets artists "protect" their work from being used by AI. The article explains how the program confuses an AI model that is attempting to copy an artwork into reading it as a style it is not. The technology makes minuscule edits to the image that are virtually undetectable to the eye.

The entire article emphasizes how many artists are not okay with their art being "stolen" and how those artists also believe Glaze is effective against AI. It calls the artists whose work has been used "victims" or "targets" and leans heavily on that fear factor, framing this as an issue artists should be worried about. I would like to see a study similar to this one without the bias. I know a lot of artists are concerned about AI being able to do what they do and copy their work, but the entire article is written with a cynical backdrop that makes the actual information hard to digest. I think they could find more interesting results that aren't so fear-mongering if they looked at the data more neutrally.

That said, this is genuinely a cool tool: the model they use essentially breaks down how the AI extracts an artist's style and modifies the image pixel by pixel to guide the AI into detecting a style that isn't there.
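To get a feel for the general idea, here is a rough sketch of how a pixel-level "cloak" could be optimized. This is not Glaze's actual code; the feature extractor, pixel budget, and step counts are my own placeholders, and the real tool reportedly uses a perceptual similarity limit rather than a plain per-pixel bound.

```python
# Rough conceptual sketch of a "style cloak" (not the Glaze implementation).
# A pre-trained VGG16 stands in for whatever style encoder the AI uses.
# `image` and `target_style_image` are [1, 3, H, W] tensors in [0, 1],
# resized to the same dimensions.
import torch
import torchvision.models as models

def compute_cloak(image, target_style_image, budget=4/255, steps=200, lr=0.01):
    """Optimize a tiny perturbation so the extractor's features for the
    cloaked image drift toward those of a different (target) style."""
    extractor = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    for p in extractor.parameters():
        p.requires_grad_(False)

    delta = torch.zeros_like(image, requires_grad=True)  # the cloak
    optimizer = torch.optim.Adam([delta], lr=lr)
    target_feat = extractor(target_style_image)           # style to fake

    for _ in range(steps):
        cloaked = (image + delta).clamp(0, 1)
        # pull the cloaked image's features toward the target style's features
        loss = torch.nn.functional.mse_loss(extractor(cloaked), target_feat)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # keep every pixel change tiny so the cloak stays invisible to the eye
        with torch.no_grad():
            delta.clamp_(-budget, budget)

    return (image + delta).clamp(0, 1).detach()
```

The loop is just gradient descent on the perturbation: the image looks unchanged to a person, but the features a model pulls out of it now point toward a style the artist never used.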






