Two items posted to LinkedIn, one about “AI” designed to interdict data gathering by other “AI,” and the other relating to research on gathering biological data to help map one’s “social media DNA” and tailor storytelling. The first item shapes part of my response to the second.
“Researchers Created AI That Hides Your Emotions From Other AI: As smart speaker makers such as Amazon improve emotion-detecting AI, researchers are coming up with ways to protect our privacy,” by Samantha Cole, Vice, 23 August 2019 (posted August 2019)
Will “AI” vs. “AI” become part of our lives, where we enlist sophisticated or intelligent apps to defend against or defeat programs used to gather data from us? This is one of the questions this article raises for me.
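To make the idea concrete, here is a toy sketch (not the researchers' actual method, which works on speech audio) of the "AI vs. AI" dynamic: a naive "emotion detector" that scores arousal from pitch variance, and a "masker" that flattens the pitch contour toward its mean before the signal reaches the detector. All names and thresholds here are invented for illustration.

```python
# Toy illustration of emotion-masking as a defense against emotion detection.
# This is NOT the method from the Vice article; it is a simplified analogy.
from statistics import mean, pvariance

def detect_arousal(pitches):
    """Pretend emotion detector: wide pitch swings -> 'aroused'."""
    return "aroused" if pvariance(pitches) > 100 else "neutral"

def mask_prosody(pitches, strength=0.9):
    """Defensive 'AI': pull each pitch value toward the mean,
    flattening the emotional cues the detector relies on."""
    m = mean(pitches)
    return [p + strength * (m - p) for p in pitches]

expressive = [180, 240, 160, 260, 150, 250]   # pitch contour in Hz
print(detect_arousal(expressive))              # -> aroused
print(detect_arousal(mask_prosody(expressive)))  # -> neutral
```

The point of the sketch is the asymmetry-reversal: the same kind of signal processing used to read us can, in principle, be run on our own behalf to obscure what is read.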
“How I’m using biological data to tell better stories – and spark social change,” by Heidi Boisvert, TED Talk, May 2019 (posted December 2019)
Found this talk by Heidi Boisvert interesting yet also somewhat troubling. If our interfacing with digital devices is indeed “rescripting our nervous systems,” and it is possible to use knowledge of storytelling and our individual responses (our “biological signature”) to influence us, where does that lead? Not just to the concern Dr. Boisvert mentions about such a capacity being turned into a weapon. It’s also the very notion that “we will soon be consuming media tailored directly to our cravings using a blend of psychographics, biometrics and AI.” Tailored by whom, and to what ends? I personally would much rather consume general media of my choosing.
But there’s another angle. What if this line of research could go into “AI” designed to work for the individual? Perhaps as an assistant that helps them find media content of interest, and also makes them more aware of their own responses to it? (But operating independently of tech or media companies.)
(my response to Dr. Boisvert’s reply)
Thank you, Heidi, for your feedback and question. Three layers to my thinking on this (as an interested non-expert in the area):
First, I’ve been noting in general the asymmetry in research and application of advanced tech. Across sectors involving interaction between organizations and people, it seems it is the organizations that own or deploy advanced new technologies, while individuals are the objects or consumers. (There are reasons for that, of course, which would be another discussion.) So I tend to ask what specific scenarios would look like if there were more balance, or even if the initiative and agency were flipped.
Second, and most directly related to your research as I understand it, if one mapped the media DNA of a person, would there be a way to (a) make that information available to them, and (b) allow them to see how it is used in real time with media they are exposed to? Could such added capacities be both a tool for individual self-awareness, and a way of enforcing some transparency in media methods?
Third, I’ve been interested in the notion of “AI” bots/assistants that work at the behest of individuals (and are not beholden to the tech giants). So without having thought this particular system through entirely, I wondered how such a thing might mesh with the kind of research you are doing. What if a person wanted to tweak or vary a media presentation away from what is tailored for them – could a personal “AI” bot intermediating with the presentation accomplish that? And more broadly could that personal bot also assist an individual in seeking out other media? Would this support or work at variance with your original goals regarding culture change?
An example of that “intermediating” role in a different setting would be an “AI” system designed to hide the emotion in one’s speech from other “AI” systems that would read that emotion. [I inserted here a link to the Vice article featured above in this post]
Anyway, I hope this makes a bit more sense. I appreciate your work and thank you for sharing it.