Brief notes
- Farewell to Fart: Janelle Shane
- char-rnn makes cute but obviously wrong recipes; larger pre-trained models make more plausible ones
- knitting patterns are much more susceptible to minor errors which can just eat up yarn (!)
- start from crochet patterns, then use GPT-2 to generate arbitrary text: it can sound realistic, and will steer any conversation onto hats lol (sampling sketch after these notes)
- GPT-3: no longer amusingly weird from scratch
- folk-rnn makes ∞ Irish folk tunes, but people aren’t interested in playing them (using AI to churn out buckets of unremarkable content does not respect the consumer)
- Kate Compton: Opulent AI, AI that calls attention to its own artifice
- use GPT-3 to complete a prompt about training a neural network to generate costumes (!)
- horse facts can be adversarial if they’re too basic!
- Q: How many giraffes are in the average living room? A: Two, but they won’t talk to each other!
- text-generating algorithms are getting better at sounding cliché
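- aside: the GPT-2 crochet trick is just fine-tuning plus sampling. A minimal sketch of the sampling half, using the stock `gpt2` checkpoint via Hugging Face `transformers` — the crochet-style prompt is made up, and Shane’s model was fine-tuned on real patterns first, which this sketch skips:

```python
# Minimal GPT-2 sampling sketch with Hugging Face transformers.
# Uses the stock "gpt2" checkpoint; the talk's model was fine-tuned
# on crochet patterns first, which is omitted here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Round 1: ch 2, 6 sc in second ch from hook."  # made-up crochet-style prompt
outputs = generator(
    prompt,
    max_length=80,           # total length in tokens, prompt included
    num_return_sequences=3,  # sample a few continuations
    do_sample=True,          # sample rather than greedy-decode
    temperature=0.9,         # higher = weirder
)
for out in outputs:
    print(out["generated_text"], "\n---")
```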
- Artificial biodiversity: Sofia Crespo
- cf. Artbreeder (latent-blending sketch after these notes)
- where is the beauty in a dataset? images? pixels? computational training? the NN itself?
- “Isn’t all art made by humans an execution of reshaping of data processed by neural networks?”
- Visual Indeterminacy in Generative Neural Art
- Codex Seraphinianus (Luigi Serafini): a showcase of life, invented by an artist
- Anna Atkins: creating an impression of life itself (cyanotype photograms of algae)
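- aside: Artbreeder-style “crossbreeding” is, at its core, interpolation between latent vectors in a trained GAN’s latent space. A minimal illustrative sketch; the 512-dim latents and the commented-out `generator` call are assumptions (Artbreeder builds on BigGAN/StyleGAN models whose internals this deliberately glosses over):

```python
# Artbreeder-style blending sketch: interpolate two GAN latent vectors.
# `generator` is a hypothetical stand-in for any trained GAN generator
# (z -> image); nothing here is Artbreeder's actual code.
import numpy as np

def slerp(z_a: np.ndarray, z_b: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation: stays near the Gaussian shell where GAN
    latents live (plain linear interpolation drifts toward the origin)."""
    cos_omega = np.clip(
        np.dot(z_a / np.linalg.norm(z_a), z_b / np.linalg.norm(z_b)),
        -1.0, 1.0)
    omega = np.arccos(cos_omega)
    return (np.sin((1 - t) * omega) * z_a + np.sin(t * omega) * z_b) / np.sin(omega)

rng = np.random.default_rng(0)
z_parent_a = rng.standard_normal(512)  # latent "genes" of parent image A (512-dim assumed)
z_parent_b = rng.standard_normal(512)  # latent "genes" of parent image B

# Five "children" blending from parent A to parent B:
children = [slerp(z_parent_a, z_parent_b, t) for t in np.linspace(0.0, 1.0, 5)]
# images = [generator(z) for z in children]  # hypothetical generator call
```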
- Harms from AI research: workers are very susceptible to wage theft, and wages are very low (MTurk can be a race to the bottom)
- How should researchers engage with controversial applications of AI?
- math can be weaponized (facial recognition blocking entry to home, etc)
- “I’m not ready to always have an alternative ready just because you’re not prepared to engage with critiques of your work” – Tawana Petty
- “This is not a mathematical problem, so this will not have a mathematical solution, and we cannot offer more sophisticated math instead of engaging as activists”
- Panel: anticipating/mitigating risks, both to participants and from products
- social impact: if you’re not willing to engage with communities, the risks are totally different (firsthand risk: finder/publisher controls the narrative!)
- it’s very worrying that the AI community has a problem with thinking about ethical implications
- you never see “nursing for good” or “food for good” the way you see “AI for good”
- the smarter you are, the better you can justify any random decisions
- how do we change incentives?
- risk pyramids
- transphobic research presented at NeurIPS last year
- very easily spotted by the trans community
- cf. the work of Nicki Washington at the intersection of race, gender, and computer science; cf. Ruha Benjamin
- HCI has value-sensitive design, as does STS
- interdisciplinary work helps spread this knowledge
- incentive mechanism: value papers holistically, rather than condemning papers as “just dataset papers”
- self-auditing does not work!