Threads like these are exhausting. As a scientist who uses AI in medicine and a Data Science PhD student, I don't even know where to begin.
There are just so many angles to cover and so much naivety around AI ethics: doomposting, people fearing what they don't understand.
Yes, AI is something we need to be thoughtful about implementing, and there need to be safeguards in place. Like any automation that removes human labor from the production of a product, we as a society need to think carefully about how we redistribute that labor to spaces that don't need or use AI as much. All of these are valid discussions. They are happening, being fiercely debated, and being worked through in the academic realm.
I just don't think most of these discussions are likely to happen in good faith here. The general Era public is too uneducated about AI for most people to offer sufficiently comprehensive and thoughtful commentary, aside from the valid fears of those it affects, and it's too easy to drive by with a fearful hot take and obliterate the atmosphere of discussion. Era isn't a scholarly portal. It's just a bunch of nerds (all of us) who signed up because of video games, and here we are in off-topic trying to pull apart topics that are being dissected far more skillfully and knowledgeably in academic circles and at AI conferences, not here.
That's not to invalidate the fears of artists and creators about things like this. It can be scary to watch this happen, especially when not enough is being understood, or done, to slow it down.
I just think discussions like this should at least be frontloaded with basic background reading and guided by experts on the subject, plus those whom it affects most. Instead, these AI/art threads are just chaos.
Here's some good background reading for this topic:
Researchers are excited about the AI — but many are frustrated that its underlying engineering is cloaked in secrecy. (www.nature.com)
Researchers are excited but apprehensive about the latest advances in artificial intelligence. (www.nature.com)
A remarkable AI can write like humans — but with no understanding of what it’s saying. (www.nature.com)
OpenAI and DeepMind systems can now produce meaningful lines of code, but software engineers shouldn’t switch careers quite yet. (www.nature.com)
Those who could be exploited by AI should be shaping its projects. (www.nature.com)
Computer scientists must identify sources of bias, de-bias training data and develop artificial-intelligence algorithms that are robust to skews in the data. (www.nature.com)
These are articles on AI ethics worth reading, going back to 2020.
I posted a thread here about a really interesting AI use. It barely got responses:
So as a multi-disciplinary researcher I tend to lurk academic twitter and biorxiv and medrxiv…well this week there was a new preprint (means not yet peer reviewed) paper by schools like UF, Indiana University, Cincinnati, etc that I found positively fascinating but I’m sure many on here may have... (www.resetera.com)
I just don't think Era is prepared for nuanced discussion of AI or the ethics surrounding it. And those of us who rely on AI to do our jobs better are being demonized or talked down to, which is incredibly frustrating to see as well.