
hobblygobbly

Member
Oct 25, 2017
7,579
NORDFRIESLAND, DEUTSCHLAND
now we have posts saying to mock AI and the people who use it, despite its massive contributions to many fields like medicine, along with other posts equating it to rape and to some dark cabal of engineers murdering artists

this forum is going off the deep end again with the AI discussion. sometimes it's like reading those crazy fucking comments on social media. some of you seriously need to touch grass
 

hobblygobbly

Member
Oct 25, 2017
7,579
NORDFRIESLAND, DEUTSCHLAND
It's the difference between AI assistance and AI takeover.
Most people only have problems with AI when it is used to completely remove the human element.
Since the dawn of civilisation, humanity has been inventing methods and technology to remove the human element or aid it; that's not unique to AI. In that regard, AI isn't changing anything new about how civilisation has progressed. Even the way we farm has changed many times, in many ways, over thousands of years, changing the human element's involvement. And that applies to all facets of life
 
Since the dawn of civilisation, humanity has been inventing methods and technology to remove the human element or aid it; that's not unique to AI. In that regard, AI isn't anything new in how civilisation has always existed. Even the way we farm has changed many times, in many ways, over thousands of years, changing the human element's involvement. And that applies to all facets of life
It's stealing people's work without their consent to do so. Artists and creatives have been put in a really terrible place because of these models, and they have every right to express their anger about it.
 

Cien

Member
Oct 25, 2017
2,526
now we have posts saying to mock AI and the people who use it, despite its massive contributions to many fields like medicine, along with other posts equating it to rape and to some dark cabal of engineers murdering artists

this forum is going off the deep end again with the AI discussion. sometimes it's like reading those crazy fucking comments on social media. some of you seriously need to touch grass

False equivalence.

Medical application - assisting a profession. Doctor still has a job. Doctor still has livelihood.

A.I. Art - Intended to completely replace the human element in it. Artists are going to be put out of work. This is already happening. You have people glowing about how they do not have to pay a creative person anymore.

This is some straight up gaslighting for creatives who are (rightfully) threatened by this.
 

hobblygobbly

Member
Oct 25, 2017
7,579
NORDFRIESLAND, DEUTSCHLAND
False equivalence.

Medical application - assisting a profession. Doctor still has a job. Doctor still has livelihood.

A.I. Art - Intended to completely replace the human element in it. Artists are going to be put out of work. This is already happening. You have people glowing about how they do not have to pay a creative person anymore.

This is some straight up gaslighting for creatives who are (rightfully) threatened by this.
I'm not making any equivalence lol, I agree with you. I'm replying to posts that were mentioning AI in general, not AI art; I didn't equate this to AI art. I'm against using artists' work without permission as well. I'm talking about people shitting on AI in general, as some other posters besides me were pointing out too. I haven't been talking about AI art in the last few posts
 
Last edited:

Orayn

Member
Oct 25, 2017
11,004
Yeah, access to effortless deepfakes that most people can't distinguish from reality isn't really compatible with having civilization.

I hope that whatever incomprehensible manmade horrors await us was worth it, AI bros. I guess you got to make some pretty pictures for a while.
 

Kalentan

Member
Oct 25, 2017
44,717
Yeah, access to effortless deepfakes that most people can't distinguish from reality isn't really compatible with having civilization.

I hope that whatever incomprehensible manmade horrors await us was worth it, AI bros. I guess you got to make some pretty pictures for a while.

You know, we destroyed civilization but just for that brief moment we no longer had to pay artists, it was wonderful. -Probably something an AI bro will say.
 

Thorrgal

Member
Oct 26, 2017
12,359
most political photos will be blurry 360p images in your Facebook feed; it's pretty hard to even see small errors in an image when you just glaze over them as compression artefacts.

Many people who fall for that want to fall for it. Trump told the most preposterous lies and people believed him because they wanted to believe him; he didn't need fake AI pics.

I do wonder what this will do for people who believe in UFOs, though. After decades of blurry pictures, we're finally going to have high-definition ones of all kinds of aircraft and alien creatures lol
 
Oct 27, 2017
5,411
False equivalence.

Medical application - assisting a profession. Doctor still has a job. Doctor still has livelihood.

A.I. Art - Intended to completely replace the human element in it. Artists are going to be put out of work. This is already happening. You have people glowing about how they do not have to pay a creative person anymore.

This is some straight up gaslighting for creatives who are (rightfully) threatened by this.

In the end, this is an administration problem, not a technical problem. The same thing could be said about the invention of the car, and how it put out of work millions of people around the world who worked in animal husbandry, horseshoe repair, cart maintenance, etc. The mechanization of transport was hugely disruptive, but also was a huge step forward for society. AI has the same potential.

So, knowing that, we then have to apply ethics. AI can potentially (and certainly, eventually will) do a better job than humans at many things, potentially including medical diagnosis, art creation, copy editing, authoring of work, legal research, admin assistant work, etc. It's not a question of if, but when. The solution isn't to not allow AI to work in these areas, it's to have a society where people don't rely on this work to survive. Star Trek utopia, if you will. Because if, for example, the US banned AI being involved in art, medicine, writing, etc, there is no doubt that other countries will still use it, and people will lose jobs in the US anyway because the other country will get the business/production.

No one should be happy about people being out of work in any profession. But it's been happening constantly over the last 120+ years, primarily in blue-collar work. Now it's affecting white-collar work (which includes creative fields). The reality is that automation (whether mechanical or AI-driven) will continue to put people out of work. We can't stop it; we need to apply ethics and keep people safe, occupied, and compensated.

This is a political issue, not a scientific one. And that doesn't make me optimistic, I just want to point that out.
 

Kalentan

Member
Oct 25, 2017
44,717
We can't stop it, we need to apply ethics and keep people safe, occupied, and compensated.

This is a political issue, not scientific. And that doesn't make me optimistic, I just want to point that out.

Humanity will be fucked over before they do anything. But also what does "occupied" even mean? Like "Oh man, I can't make money anymore doing what I love, but I'm so happy the government gave me a shit job that they haven't decided to automate yet!"
 

Cien

Member
Oct 25, 2017
2,526
In the end, this is an administration problem, not a technical problem. The same thing could be said about the invention of the car, and how it put out of work millions of people around the world who worked in animal husbandry, horseshoe repair, cart maintenance, etc. The mechanization of transport was hugely disruptive, but also was a huge step forward for society. AI has the same potential.

So, knowing that, we then have to apply ethics. AI can potentially (and certainly, eventually will) do a better job than humans at many things, potentially including medical diagnosis, art creation, copy editing, authoring of work, legal research, admin assistant work, etc. It's not a question of if, but when. The solution isn't to not allow AI to work in these areas, it's to have a society where people don't rely on this work to survive. Star Trek utopia, if you will. Because if, for example, the US banned AI being involved in art, medicine, writing, etc, there is no doubt that other countries will still use it, and people will lose jobs in the US anyway because the other country will get the business/production.

No one should be happy about people being out of work in any profession. But it's been happening constantly over the last 120+ years, primarily in blue-collar work. Now it's affecting white-collar work (which includes creative fields). The reality is that automation (whether mechanical or AI-driven) will continue to put people out of work. We can't stop it; we need to apply ethics and keep people safe, occupied, and compensated.

This is a political issue, not a scientific one. And that doesn't make me optimistic, I just want to point that out.

It is true no one likes being made obsolete. The biggest issue most creatives have is this particular tech was created for the explicit purpose of taking from artists. The creators have outright stated that. With medical AI, for example, it was made to assist, complement in the medical field. Basically one application of AI is far more hostile than the other.

In a perfect world, sure, we let AI do the work and sit back in a utopia. But as things are, and given the direction we're headed, that is the furthest thing from happening.
 

Orayn

Member
Oct 25, 2017
11,004
In the end, this is an administration problem, not a technical problem. The same thing could be said about the invention of the car, and how it put out of work millions of people around the world who worked in animal husbandry, horseshoe repair, cart maintenance, etc. The mechanization of transport was hugely disruptive, but also was a huge step forward for society. AI has the same potential.

So, knowing that, we then have to apply ethics. AI can potentially (and certainly, eventually will) do a better job than humans at many things, potentially including medical diagnosis, art creation, copy editing, authoring of work, legal research, admin assistant work, etc. It's not a question of if, but when. The solution isn't to not allow AI to work in these areas, it's to have a society where people don't rely on this work to survive. Star Trek utopia, if you will. Because if, for example, the US banned AI being involved in art, medicine, writing, etc, there is no doubt that other countries will still use it, and people will lose jobs in the US anyway because the other country will get the business/production.

No one should be happy about people being out of work in any profession. But it's been happening constantly over the last 120+ years, primarily in blue-collar work. Now it's affecting white-collar work (which includes creative fields). The reality is that automation (whether mechanical or AI-driven) will continue to put people out of work. We can't stop it; we need to apply ethics and keep people safe, occupied, and compensated.

This is a political issue, not a scientific one. And that doesn't make me optimistic, I just want to point that out.
All I see in this post is handwringing about how, yes, AI is going to be used in unethical ways that will ruin billions of people's lives, but we have to be very nice and polite because the tech itself is somehow blameless in that process. Also, we shouldn't be mad at AI because it COULD be used for good purposes in an ideal world. Pretty worthless sentiment, to be perfectly honest.

But hey, congrats on your complete unwillingness to take any stance on the ethics of creating the Torment Nexus, because that's someone else's problem.
 
Last edited:

Duebrithil

Member
Oct 25, 2017
831
All I see in this post is handwringing about how, yes, AI is going to be used in unethical ways that will ruin billions of people's lives, but we have to be very nice and polite because the tech itself is somehow blameless in that process. Also, we shouldn't be mad at AI because it COULD be used for good purposes in an ideal world. Pretty worthless sentiment, to be perfectly honest.

Could? AI is already being used for good right now: physics, weather simulation, protein folding...

I'm more pissed about artists' works being used without their consent to train those models. In the hypothetical scenario where they were trained without stealing, I'd still view the work of human artists as relevant: industrialized vs. handcrafted.
 

Jroc

Banned
Jun 9, 2018
6,145
now we have posts saying to mock AI and the people who use it, despite its massive contributions to many fields like medicine, along with other posts equating it to rape and to some dark cabal of engineers murdering artists

this forum is going off the deep end again with the AI discussion. sometimes it's like reading those crazy fucking comments on social media. some of you seriously need to touch grass

Copyright infringement and data ethics are a big issue, but some people give off serious luddite energy when it comes to AI.

I try not to be overly pessimistic when it comes to technology breakthroughs.

Of course it is, this won't end civilization lol

It just lowers the bar for things that have already been possible for a while. It'll be interesting to see how the world adapts to the ease of use though.
 
Oct 27, 2017
5,411
Humanity will be fucked over before they do anything. But also what does "occupied" even mean? Like "Oh man, I can't make money anymore doing what I love, but I'm so happy the government gave me a shit job that they haven't decided to automate yet!"

I agree with your concern, and I think politicians won't have the answer (or won't implement UBI if it was the answer, etc).

It is true no one likes being made obsolete. The biggest issue most creatives have is this particular tech was created for the explicit purpose of taking from artists. The creators have outright stated that. With medical AI, for example, it was made to assist, complement in the medical field. Basically one application of AI is far more hostile than the other.

In a perfect world, sure, we let AI do the work and sit back in a utopia. But as things are, and given the direction we're headed, that is the furthest thing from happening.

I agree, I don't see world leaders actually implementing fair solutions. However, I disagree that other advancements were only ever made to assist. They have only been made that way so far because they haven't yet succeeded in fully replacing doctors, etc. But you can bet that if an insurance company or private hospital system in the US can effectively replace a $500,000+/yr specialist position (especially in diagnostics), and patients will accept it, then they will.

All any efficiency/automation requires is:
  • Acceptance from leadership (which weighs costs against benefits)
  • Acceptance from customers (who weigh cost against what they're comfortable with)
If people feel comfortable paying less for AI-generated stock photos, for example, and a stock photo company is okay with using AI to make them, then it will happen. I think, realistically, people are more likely to be okay with using AI in place of human artists than with replacing human doctors when it comes to diagnosis. At least for now. But that will almost certainly change.

All I see in this post is handwringing about how, yes, AI is going to be used in unethical ways that will ruin billions of people's lives, but we have to be very nice and polite because the tech itself is somehow blameless in that process. Also, we shouldn't be mad at AI because it COULD be used for good purposes in an ideal world. Pretty worthless sentiment, to be perfectly honest.

But hey, congrats on your complete unwillingness to take any stance on the ethics of creating the Torment Nexus, because that's someone else's problem.

You're putting words in my mouth. To address your points:
  • Technology has already put millions out of work over the last 120+ years (basically since the industrial revolution). In fact, you could argue that even basic things like improvements in farming practices over the last 2,000+ years have put people out of work, as fewer people are required to produce the same food. The main difference with modern automation is that instead of a technology taking 10-100 years to phase in, it can now happen within a year or two
  • I question that "billions" of lives will be ruined by automation of any sort. In the past, those who could not go into a certain profession inevitably went into another. However, I do agree that millions will face hardship/ruin in much shorter periods of time than in the past
  • I never said anything about being nice and polite about these changes
  • Technology/AI is, indeed, blameless, just as the internal combustion engine was blameless for destroying the horse industry
  • I never said people shouldn't be mad about some implementation of automation
  • AI is already used for good things, that's not in question
AI, like the combustion engine, electricity, nuclear physics, and other things, is, at the end of the day, a tool. It will be used by leaders, industry, and consumers based on what the costs and benefits are. And those calculations will vary based on location. I am not optimistic that it will be rolled out in good ways, or that consumers will even care that much about people losing jobs (until they lose theirs). History is full of examples of people not caring about bad things until they are personally affected. There have been entire genocides where people turn the other way because their own personal experience has improved as a result of horror. And AI will make many people's lives better, even in the sense that art is cheaper (at the expense of artists no longer having an income). That doesn't make the loss of artistic jobs good; it's just the reality that most people won't care, because now they can save money (because they themselves are making less). Nuclear power replacing tens of thousands of coal miner jobs was bad for the coal miners, but the average person in most places didn't even think about it.

Don't assume I am in favour of this change that is happening in our lives, I am simply commenting on what I think will happen.
 

Orayn

Member
Oct 25, 2017
11,004
It just lowers the bar for things that have already been possible for a while. It'll be interesting to see how the world adapts to the ease of use though.
I have a bold prediction: We will adapt very badly or not at all.

Facebook is already directly responsible for genocide. Numerous people have been killed in moral panics that started from rumors and fake stories spread via WhatsApp. We aren't able to handle the destructive effects of smartphones and the social media that we have right now. Now imagine that with the ability to easily create photorealistic deepfakes of anyone doing anything.
 

Thorrgal

Member
Oct 26, 2017
12,359
I have a bold prediction: We will adapt very badly or not at all.

Facebook is already directly responsible for genocide. Numerous people have been killed in moral panics that started from rumors and fake stories spread via WhatsApp. We aren't able to handle the destructive effects of smartphones and the social media that we have right now. Now imagine that with the ability to easily create photorealistic deepfakes of anyone doing anything.

Humanity doesn't need AI to commit genocide or go to war, nor Facebook or WhatsApp, or the Internet for that matter. That's just the excuse; they would find (and have been finding) others for millennia.
 

Orayn

Member
Oct 25, 2017
11,004
Humanity doesn't need AI to commit genocide or go to war, nor Facebook or WhatsApp, or the Internet for that matter. That's just the excuse; they would find (and have been finding) others for millennia.
It's not "just the excuse" when it directly enables new methods and huge efficiency gains in causing harm. It's more akin to the invention of the machine gun allowing colonial powers to mow people down in ways that weren't possible before.
 

Thorrgal

Member
Oct 26, 2017
12,359
It's not "just the excuse" when it directly enables new methods and huge efficiency gains in causing harm. It's more akin to the invention of the machine gun allowing colonial powers to mow people down in ways that weren't possible before.

No it's not. The main purpose of the machine gun was killing people.
 

LGHT_TRSN

Member
Oct 25, 2017
7,141
When people saw the first moving image of a train coming towards them they freaked the fuck out.

Ultimately this stuff is going to drastically change the way we view things as a human race, much like film and technology already have and will continue to do.

If we can't trust random images on the internet, then we won't, just like we've learned not to trust whatever the fuck anyone says on the internet.

Like...I get it....deepfakes are crazy and can be used in nefarious ways.....but that is true of nearly every technology ever invented....
 

Soi-Fong

Member
Oct 29, 2017
1,482
Illinois
now we have posts saying to mock AI and the people who use it, despite its massive contributions to many fields like medicine, along with other posts equating it to rape and to some dark cabal of engineers murdering artists

this forum is going off the deep end again with the AI discussion. sometimes it's like reading those crazy fucking comments on social media. some of you seriously need to touch grass

Yup, true. AI has had a big impact in medicine in predicting health outcomes and mapping proteins.

AI isn't a black and white topic.

Era has a thing w/ judging a topic to just be "evil" or "good" which is not a healthy way of looking at things.
 

Clessidor

Member
Oct 30, 2017
260
AI has had a big impact in medicine in predicting health outcomes and mapping proteins.
The issue I see here is that the term AI is used quite universally. AI art and AI protein mapping are two completely different fields of AI usage and development. And this thread is mostly about image-generating AI.
 

opticalmace

Member
Oct 27, 2017
4,030
The issue I see here is that the term AI is used quite universally. AI art and AI protein mapping are two completely different fields of AI usage and development. And this thread is mostly about image-generating AI.
I think most have been focusing on the art aspects in this thread, but it's only natural for a topic about an application of AI to lead to discussion about AI as a whole in my opinion.
 

Soi-Fong

Member
Oct 29, 2017
1,482
Illinois
Reminder that it is always correct to mock and scorn AI and its users

The issue I see here is that the term AI is used quite universally. AI art and AI protein mapping are two completely different fields of AI usage and development. And this thread is mostly about image-generating AI.

Yet, you have folks putting AI under one umbrella even though the thread is focused on Midjourney.
 

GokouD

Member
Oct 30, 2017
1,128
The attitude on here toward AI really surprises me. Not so much the apprehension about where it could lead, that's understandable, but the refusal by some to even engage with or learn about it. When computers first became widespread they had the potential to totally change things, and not always in good ways. Some people shook their fists at the sky and refused to learn, and now they're sad old boomers who have to ask their grandchildren to fill out forms online for them. Do you really want to end up like that?
 
Dec 3, 2022
233
The attitude on here toward AI really surprises me. Not so much the apprehension about where it could lead, that's understandable, but the refusal by some to even engage with or learn about it. When computers first became widespread they had the potential to totally change things, and not always in good ways. Some people shook their fists at the sky and refused to learn, and now they're sad old boomers who have to ask their grandchildren to fill out forms online for them. Do you really want to end up like that?


I'd prefer for my artwork and the artwork of others to not be stolen and used without consent to replace industries of workers with skills developed over a lifetime.

If this were a general thread about AI, sure. But this specific thread is about a tool made based on stolen work. And yet the people most affected by it are dismissed, banned, and told to "get with the times, old man!"

Gonna avoid/ignore AI threads here entirely because it's not worth it.
 
Dec 3, 2022
233
Out of curiosity, is there any good evidence they've stolen art? In the actual legal sense where they've broken specific copyrights under which images were uploaded?

Y'all had a thread a few weeks/couple months back about a model specifically trained on a friend of mine's art, using her own personal work as well as books she illustrated for Disney (consent/copyright that wasn't hers to give, as that's more of a work for hire thing).
 

ScoobsJoestar

Member
May 30, 2019
4,071
I think it's a huge mistake to focus the concern on the ethics of the training sets. And don't get me wrong, the ethics are questionable at best. It's just that with that focus, once the tech gets advanced enough to be trained on smaller, ethical sets (I've been reading some really interesting papers on that), everyone will shrug and go "well, what's the issue now?" as artists are put out of jobs.

The issue that I see is less in the ethics and more in the pragmatic people-losing-jobs thing. Yeah, the ethics are bad, but given how training algorithms are being improved, it shouldn't be too long until that's doable. It will take a bit longer than the raw "the entire internet" approach, but it will come.

And I feel like if the discussion is focused on just the training sets we are just gonna be delaying the problem for an uncertain amount of time and more or less accepting the issues that will come. I think the focus should be on how to make sure the countless artists who are affected by this are still able to make a living - whether that be through legal restrictions on commercializing AI art, UBI or something else.
 

Orayn

Member
Oct 25, 2017
11,004
No it's not. The main purpose of the machine gun was killing people.
People will build AI "machine guns" in the very near future. Someone will combine a system that's meant to gauge people's emotional state and serve them content accordingly (Facebook tried this) with image/sound/video synthesis to send out huge volumes of highly targeted deepfake propaganda specifically meant to incite violence. That's just the low-hanging fruit; the other uses we'll see will be far worse.

We have a lot to fear from implementations of AI that are designed to cause harm and it seems like very little is being done to prevent them from being developed or used.
 
Last edited:

Nothing Loud

Literally Cinderella
Member
Oct 25, 2017
9,998
now we have posts saying to mock AI and the people who use it, despite its massive contributions to many fields like medicine, along with other posts equating it to rape and to some dark cabal of engineers murdering artists

this forum is going off the deep end again with the AI discussion. sometimes it's like reading those crazy fucking comments on social media. some of you seriously need to touch grass

Yup, true. AI has had a big impact in medicine in predicting health outcomes and mapping proteins.

AI isn't a black and white topic.

Era has a thing w/ judging a topic to just be "evil" or "good" which is not a healthy way of looking at things.

Threads like these are exhausting. I don't even know where to begin, as a scientist that uses AI in medicine and a Data Science PhD student.

There's just so many angles to cover and so much naivety related to AI ethics, doomposting, people fearing what they don't understand.

Yes, AI is something we need to be thoughtful about implementing; there need to be safeguards in place, and just as with any automation invention that removes human labor from the production of a product, we as a society need to be thoughtful about how we redistribute human labor to new spaces that don't need or use AI as much. All of these are valid discussions. They are happening and being fiercely debated and resolved in the academic realm.

I just don't think most of these discussions are prone to happen in good faith here. The general Era public is too uneducated about AI for most people to have sufficiently comprehensive and thoughtful commentary, aside from the valid fears of those it affects, and it's too easy to hot-take/drive-by with your fearful opinion of AI and obliterate the atmosphere of discussion. Era isn't a scholarly portal. It's just a bunch of nerds (all of us) who signed up because of video games, and here we are in off-topic trying to pull apart topics that are being dissected more skillfully and knowledgeably in academic circles, not here. At AI conferences and such.

That's not to invalidate the fears of artists and creators with things like this. It can be scary to see something like this happen especially when there's not enough being understood or done to slow it down.

I just think discussions like this should maybe at least be frontloaded and guided by basic background reading and experts on the subject plus those whom it affects most. Instead we just have chaos going on in these AI/art threads.

Here's some good background reading for this topic:

Nature.com said:
Bockting and van Dis are also concerned that increasingly these AI systems are owned by big tech companies. They want to make sure the technology is properly tested and verified by scientists. "This is also an opportunity because collaboration with big tech can of course, speed up processes," she adds.

Van Dis, Bockting and colleagues argued earlier this year for an urgent need to develop a set of 'living' guidelines to govern how AI and tools such as GPT-4 are used and developed. They are concerned that any legislation around AI technologies will struggle to keep up with the pace of development. Bockting and van Dis have convened an invitational summit at the University of Amsterdam on 11 April to discuss these concerns, with representatives from organizations including UNESCO's science-ethics committee, Organisation for Economic Co-operation and Development and the World Economic Forum.

Despite the concern, GPT-4 and its future iterations will shake up science, says White. "I think it's actually going to be a huge infrastructure change in science, almost like the internet was a big change," he says. It won't replace scientists, he adds, but could help with some tasks. "I think we're going to start realizing we can connect papers, data programmes, libraries that we use and computational work or even robotic experiments."

  • GPT-4 is here: what scientists think (www.nature.com): Researchers are excited about the AI, but many are frustrated that its underlying engineering is cloaked in secrecy.
  • What ChatGPT and generative AI mean for science (www.nature.com): Researchers are excited but apprehensive about the latest advances in artificial intelligence.
  • Robo-writers: the rise and risks of language-generating AI (www.nature.com): A remarkable AI can write like humans, but with no understanding of what it's saying.
  • Are ChatGPT and AlphaCode going to replace programmers? (www.nature.com): OpenAI and DeepMind systems can now produce meaningful lines of code, but software engineers shouldn't switch careers quite yet.
  • Don't ask if artificial intelligence is good or fair, ask how it shifts power (www.nature.com): Those who could be exploited by AI should be shaping its projects.
  • AI can be sexist and racist — it's time to make it fair (www.nature.com): Computer scientists must identify sources of bias, de-bias training data and develop artificial-intelligence algorithms that are robust to skews in the data.

I posted a thread here that was really interesting regarding an AI use. It barely got responses.
www.resetera.com

Really cool science, or nightmare fuel? Brain Organoid Computing for Artificial Intelligence

So as a multi-disciplinary researcher I tend to lurk academic twitter and biorxiv and medrxiv…well this week there was a new preprint (means not yet peer reviewed) paper by schools like UF, Indiana University, Cincinnati, etc that I found positively fascinating but I’m sure many on here may have...

I just don't think Era is prepared for nuanced discussion regarding AI or ethics surrounding it. And those of us who do rely on AI to do our jobs better are being demonized or talked down to, and that's incredibly frustrating to see as well.
 

offtopic

Banned
Nov 21, 2017
2,694
Threads like these are exhausting. I don't even know where to begin, as a scientist that uses AI in medicine and a Data Science PhD student.

This is where pretty much all nuanced discussion goes to die - I'd definitely look elsewhere (and understand your frustration).
 

InfiniteKing

Member
Oct 26, 2017
2,213
I'd prefer for my artwork and the artwork of others to not be stolen and used without consent to replace industries of workers with skills developed over a lifetime.

If this were a general thread about AI, sure. But this specific thread is about a tool built on stolen work. And yet the people most affected by it are dismissed, banned, and told to "get with the times, old man!"

Gonna avoid/ignore AI threads here entirely because it's not worth it.
I just realized who got banned and this is just too much. This is a lot of bullshit. If people want to go on with their "You cAnT StOp THis" and go on about how great this big fucking grift is, then go right ahead. I'm done.
 

Zeliard

Member
Jun 21, 2019
10,952
Threads like these are exhausting. I don't even know where to begin, as a scientist that uses AI in medicine and a Data Science PhD student.


What do you make of the notion, apparently proposed earlier by a couple of AI ethicists, that the genie isn't quite out of the bottle yet?

Because from a layperson's perspective I don't see how that's the case, though I don't know what their argument is. Presumably it would be better to deal with the reality of it than getting angry (an understandable reaction certainly) over something that as far as I can tell is not only completely inevitable but will only become much more pronounced and ubiquitous over time.

I'm largely outside of this debate, but I think it's critically important and neither side should be shut down. I've had my eyes opened by a lot of testimonials and arguments here from artists, but I absolutely want to hear from those in AI fields as well. I don't see how there's any other way past this. We already know legislation will move like molasses.
 

Kendrid

Member
Oct 25, 2017
3,131
Chicago, IL
just like we've learned not to trust whatever the fuck anyone says on the internet.

I just looked at my town's FB group, there are still, right now today, people talking about litter boxes in schools and drag shows in schools, neither of which ever happened. There are people that believe everything they see on the Internet, it is scary.
 

Slatsunus

Member
Nov 2, 2017
3,219
Threads like these are exhausting. I don't even know where to begin, as a scientist that uses AI in medicine and a Data Science PhD student.


OpenAI's CEO is a doomsday prepper who thinks he is creating a god. He isn't a voice of reason at all. He literally thinks he's going to end the world and manage to escape it.
 

Divvy

Teyvat Traveler
Member
Oct 25, 2017
5,931
I just realized who got banned and this is just too much. This is a lot of bullshit. If people want to go on with their "You cAnT StOp THis" and go on about how great this big fucking grift is, then go right ahead. I'm done.
I want to know what they got banned for because I can't find the offending post
 

thoughtloop

Member
Apr 9, 2022
282
I saw a video about the rollout of electric scooters in major US cities a few years ago: how they just appeared en masse, practically overnight, in cities that were totally unprepared and given no warning. The CEOs of those scooter companies are in interviews saying they didn't care that laws and civic structures weren't ready for scooters suddenly being everywhere in a dense urban environment. I see very similar things happening with the rollout of AI art services. No one is stopping to take the time and develop standards and laws that will protect real people's talents, work, and livelihoods. And capitalism absolutely won't care in the end. This is why the lawsuits, like the one Getty Images is pursuing, are very important. It's incredible tech, but we're not prepared for it as a culture.

That said, I work in architecture, and we needed to fill a gallery space's renders with placeholder artwork for a major client. Multiple renders and an animation were on deck. In my field, people typically just grab (steal, really) images from Google, and no one asks questions. I'm the only person I know in my field who tells people to do due diligence and check the attributions and use clauses for images that will appear in renders we know the client will have published in papers and online. In this instance, I worked with our render department to leverage Midjourney v4 and OpenAI's DALL-E 2 to generate placeholder art assets. The client was thrilled, and even commented on the gallery being filled with artwork. The renders looked great. The current AI situation is forked up, but I like the idea of the image equivalent of "lorem ipsum" being in my toolbelt. It just needs more legislation, oversight, and transparency.