If you're like me, you've watched the growth and spread of AI with concern bordering on alarm.
I would still argue the WORST use case is deepfakes, where people can have their image violated (e.g., the deepfake porn of Taylor Swift) or where they can be lied to (deepfaked political news).
But another concern is the belief among some that, if AI doesn't QUITE replace human-created stuff, it's "good enough" for most people.
(Which actually mirrors another concern I have: the hollowing-out of the middle, where there are very high-service, high-price, high-quality stores for the extremely wealthy, the mid-range places - the Sears and JC Penneys - are largely gone, and so what remains are dollar stores and places like Wal-mart, and apparently if you make less than $200K a year, you should be content with those; their products are "good enough" for you)
AI-created stuff is, by its very nature, derivative: it's basically taking what artwork/writing/music it's given access to* and chopping and remixing it, not creating new things. (And yes, there's a long history of pastiche and sampling among human creators, but somehow that feels different, not "algorithmic")
(*and a major issue is some platforms sneakily allowing AI to "scrape" the material humans created without being very upfront about that with the creators)
But the other thing is that AI gives the illusion of effortless creation. Like, "Why should people bother to do this?" kind of thing.
I know I've felt that when I see the work of much more innovative knitters, or when I realize I will never play the piano all that well and if I want to hear a *good* version of the Raindrop Prelude, I should pull up one of the recordings on YouTube, instead of struggling through it myself and making mistakes. And it is somewhat discouraging.
But Sam Bergman, as part of an ongoing discussion on why creative people have a very different perception (and an alarmed one) about AI as compared to its promoters, noted: "I feel like that's such a common misconception among adults who didn't grow up steeped in the creative arts - that there's "one crazy trick" that turns you into an artist or poet or performer, rather than years of hard, repetitive work."
Yes. And I'm not even that good at a lot of creative things, but I can tell I get better the more I work at it - just today, I was able to finally play through the Faye Lopez arrangement of the old Swedish hymn "Day by Day" (she arranged it in the style of one of Liszt's "Consolations," and it's a really lovely arrangement, but is somewhat challenging to someone who's an intermediate pianist). And it does make you feel good to master the thing - that's even beyond the fact that you DID kind of "master" it (I still made a few mistakes). It's the feeling of "I worked at this and did this" that's satisfying in a way no "one weird trick" will ever be.
(And yes, I can hear my father now: "Nothing worth doing is ever easy")
But the concerns about AI apply to other things. (And I think there ARE parallels with the discussion of "fewer people learned to play instruments after recorded music became widely available" - again, why do the thing if someone else does it better, and you can just pay money to experience it? Though there's something in the "doing it yourself" part that has value, even if it's hard to pin down.)
For example: a couple semesters back, I had a student do their in-class research project on a botanical survey of a field. But. They weren't a botanist, and instead of coming to me for help, or using a printed key, they used one of those apps where you photograph the plant. And they didn't fact-check the app, which was the bigger problem. So they reported as present a species of grass that has NEVER been found in the US. (And it was one that superficially resembles a couple of species common here.)
And there IS a real danger in outsourcing all knowledge and judgement to a computer - there have been news stories about people having medical-insurance claims for things like *cancer treatment* denied because apparently an AI was evaluating the things. Yes, things like iNaturalist can be helpful - I've used it myself, though mostly when "oh dang, I KNOW this plant and the name is on the tip of my tongue but I can't recall it in this moment" and seeing a list of names makes me go "oh yeah, that's Verbena stricta."
So part of the rise in AI that is concerning to me is the fact that people think it means they can outsource their "thinking" to a computer, and can turn off their brains and their common sense. Or, alternatively - and perhaps more true of my students - they don't have confidence in their own judgement and think "well the hivemind of the internet has to be smarter than I am" (I'm sorry? Have you SEEN what people say on the Internet?)
But the other thing is, when we "give up" and don't try to gain expertise or skill on our own....well, we lose something as humans.
I have people ask me all the time, "How do you identify plants as easily as you do?" and my response is some combination of (a) "I've been doing this for 30 years or more at this point" and (b) "I pay attention to details and I don't jump to conclusions." And I think that second one IS important: I tell my students not to latch on to a single trait (like, for example, compound leaves), that you have to look at the whole plant, and if you're going "well, this looks kind of like X but it also doesn't," to trust your instinct and consider other species.
And so I think too much reliance on a hivemind or AI or whatever may do at least two things:
1. We stop valuing the act of individual and imperfect creation, and don't bother to write or draw or play music or any number of things that are soul-enriching
2. We get dumber. And I mean in the Idiocracy (even though I hate that movie) sense - where we don't want to think for ourselves so we'll just lean on someone else or something else to decide for us. Which may mean we wake up one day and find the thing deciding things we don't like, and it will be too late.
But one thing I know I have to fight against in myself is the "the things I can make and do are only very imperfect" feeling, and things like "why bother to try playing Debussy, I'm not good enough, and it's easier to hear a perfect recorded rendition" - but I DO think trying and maybe failing has value in it.
And I do think the "grindset techbro" way of thinking - where only the end matters and not the process - is driving a lot of the AI stuff. That "look, this is a kind-of-janky picture of a dog, but I was able to generate it in far less time* than drawing my own picture, which would probably be jankier, so why bother drawing" attitude kind of impoverishes us and... I don't know, it feels like it would make us less human if we let a computer do all the creative stuff we once did as a species.
I mean, I guess I don't have a problem with AI doing some of the drudgery stuff - like, I don't know, restoring old film to take scratches or white spots off of it. But there's danger in giving it the power to make important decisions that have an ethical dimension (like: who gets medical treatment when "resources" are limited), and in letting it do the creating, because the "time" that "frees up" for humans may well only be filled with more drudgery for us to do.
(*and that's absent the entire question of the massive power and water consumption in running and cooling the data centers that churn out AI)
So maybe the task for a lot of us, especially those of us long-trained in being highly self-critical, is to be more accepting of our imperfect efforts, and to remember that doing anything well takes work, and so if you can't draw a dog well at first, keep trying, keep working, maybe eventually you will.