You're a fool if you think the "pro union" stance of MS is some kind of serious change of heart. MS is as anti-union as any mega-corp can get.
It's like you saw the word "union" and started having an argument with me on a complete tangent about unions. You do you.
Are you trolling now? HStallion's response is a direct reply to the point you made.
The datasets are built off of, and I mean literally built out of, stealing from the work of others.
Ok, so build a data set off of non-copyrighted material, or pay artists to create new art to train a data set.
Sure, there are issues with the current models being trained on "stolen" materials, but what is stopping, say, Naughty Dog from training their own model on work they paid their artists to create? Even if you don't train the models on "stolen" materials, companies still have massive sets of internal materials and content to train AI on.
How much text and images do you think the New York Times owns? They paid journalists and photographers for literally decades of content. They own it all. They can do what they want with that, and it's not "stolen". They own it.
So why aren't they doing it? The majority of these companies aren't training the models on their own materials. I think, right off the bat, only Level-5 is doing it.
The part where the overall goal is to displace and replace artists is also something people take issue with.
The question is why aren't they already doing this instead of what's been going on since AI burst on the scene? Hmm...
Because that's not enough data.
The NYT is currently suing these companies for using their archives. Millions of articles, and the 2nd largest source of data for these systems. Yet that still only represents a tiny fraction of the data they needed.
The training data for these systems is made up of billions of images. Even the fine-tuned models like the ones that Level 5 uses are likely relying on stuff like Stable Diffusion as a baseline. I think people are underestimating the amount of data required to make these systems work.
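To put the "baseline" point in concrete terms, here is a minimal sketch of what a studio-specific model usually is in practice: a small adapter loaded on top of a general pretrained base, not something trained from scratch on in-house art alone. This assumes the Hugging Face diffusers library and a CUDA GPU, and the LoRA repo name is a made-up placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

# The general-purpose base model, pretrained on billions of scraped image/text pairs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical studio fine-tune: a LoRA adapter trained on a comparatively tiny
# in-house art set. It only nudges the base model's style; the heavy lifting
# still comes from the base weights and the data they were trained on.
pipe.load_lora_weights("some-studio/in-house-style-lora")  # placeholder repo id

image = pipe("key art of a hero character, in-house style").images[0]
image.save("hero.png")
```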
So then this will be a serious long-term issue, since no one company will be able to actually provide that kind of data by themselves.
Yes. I think that's the reason why AI developers are not willing to compromise on their ability to harness all of this data for free. If they give rights holders an inch, it could lead to either large amounts of training data being removed (thus significantly weakening the model) or licensing schemes with astronomical costs attached to them.
If it's not enough data without using stolen artwork to train, then the technology simply isn't viable or sustainable and it shouldn't be developed. Period.
I completely agree! A technology that relies entirely on mass copyright theft should not be allowed to exist. No argument from me there.
How much data does it take for a human to learn?
Sit the robot in a classroom for 12 years, then 4 years of lectures, tour it through museums, bring it to movies, let it play video games.
Basically just let it learn how humans learn.
Would a lifetime of experiences be enough data to "train" the model without stealing copyrighted material?
I mean, you can tour an AI through a museum all you want. It's not going to help it improve.
But that's the whole point of generative models. It's to steal work. That's the entire point. And it's being sold as democratisation of tools.
Its whole purpose is to remix existing content into "new" content to be monetised. It's industrialising a field that so far has been difficult to exploit on this scale.
Yay for capitalism.
This needs to be bolded, printed, and passed out to people.
It won't matter. This is just another instance of technology ultimately showing that a lot of people are dumb, delusional, self-absorbed, and selfish. Now a lot of them really do think they are artists because of this, or they are trolling, and the end result is terrible no matter what. And the worst part of it is that the companies behind the tech don't care about art; it's more about the user data they are gathering to further their work, and the money.
You can really tell what people think of artists and art when they put stolen in quotation marks multiple times 🙄
People really do think you can just take someone's artwork for any reason, and the artist has no right to complain. It's fuckin' wild the entitlement people have.
We're kind of at that point now. AI can generate genuinely good art now, even without weird hands and such. I don't think your average art consumer can even tell anymore. It's just too convincing these days.
With how good AI art has become lately, I'm not surprised when I hear small time artists can't get commissions anymore.
First: Your answer is incomplete and not good
But we're going to have real consequences from this. People already deal with misinformation poorly; AI making false pictures and kiddie porn indistinguishable from real life (especially to the layman), or people citing cases made up by AI chatbots, is going to cause SERIOUS problems that people are just shrugging about now. This is actually a disaster that more people need to talk about and take seriously.
"I'm aware that's quite a depressing outlook, but that's how I see things. I try not to let it affect my mental health too significantly."
"You won't let it affect your mental health" is a useless answer when it isn't your career. People have had their shit stolen, and if the collective answer is to shrug, then we will be failing FAR more creatives in the future, and this extends far beyond simple visual arts.
If you trailed off giving your answer, then of course I won't care to hear it. Why should I?
It's just how it always is. Look at climate change as an example; everything's pretty fucked now. Under capitalism, if the market value of the harm caused is exceptional, people will turn a blind eye until it poses an existential threat. I'm aware that's quite a depressing outlook, but that's how I see things. I try not to let it affect my mental health too significantly.
So did Xbox remove the tweet or do anything about it, or nothing?
Yep.
We have to separate what is right from what is most likely about to happen. I genuinely wonder if ethics or concern has ever managed to stop progress, since capitalism is in charge.
Human cloning is the only example I can think of. Are there any others?
Generative AI has some dark sides for sure, but it is also something big that is about to happen. I could see the most sensitive countries being inclined to put a stop to it, but other countries probably wouldn't care, and that is going to affect the decisions of the "most sensitive countries" too.
Apparently, there's no way back.
"So did Xbox remove the tweet or do anything about it, or nothing?"
They deleted it the same day.
Just want to say it will be impossible to stop this train. You can set up Stable Diffusion for free in less than 10 minutes and start generating images, and in the same way it's very easy to train a model yourself with, for example, less than a hundred images, which is insane when you think about it. It will be impossible to ban, as you can just run it locally.
Also, there are tons of databases available with stolen art, as nobody really files copyright strikes; with millions of images in those databases, it would take an insane amount of time.
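For reference, the "run it locally" part is a few lines of code. This is a minimal sketch, assuming the Hugging Face diffusers and torch packages and a consumer GPU; after the first download the weights sit in the local cache, so generation keeps working entirely offline.

```python
import torch
from diffusers import StableDiffusionPipeline

# The first run downloads the publicly hosted weights into the local cache;
# flip local_files_only to True afterwards and nothing remote is needed anymore.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    local_files_only=False,
).to("cuda")

# Everything below runs on the local machine; no account, no service, no kill switch.
image = pipe("a watercolor lighthouse at dusk, stormy sky").images[0]
image.save("lighthouse.png")
```

Training a small custom style on top (LoRA, DreamBooth, textual inversion) is a heavier step, but it also runs on the same local hardware with a folder of images.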
"Basically just let it learn how humans learn."
It can't, because that's not what machine "learning" does, and not how it works. It can't create new ideas and it's not a human brain; a huge part of the way it's being branded is designed to trick people, like you have been, when it's nothing of the sort.
Meaning they never really create anything on their own out of thin air, but simply take existing works and rearrange them, for lack of a better term?
the thought that copyright law protects something, anything, that isn't a giant corporation is laughable in and of itself
this requires a full retooling of it or, better yet, capitalism itself
speaking as an artist, i'm just waiting for the artificially generated sun to swallow me and my work whole
The thing is, all the megacorporations who aren't Microsoft are victims too. What are the odds they're profiting off of stolen Nintendo artwork, for example?
Isn't it just basically "observing" a ton of data to build the model? From my understanding, the reason we don't allow it to create new ideas is for safety, accuracy and misinformation reasons, not that it couldn't actually do it.
How do humans learn, if not by observing, committing it to memory, then recalling, "creating" and iterating on it as "new" ideas?
They are all profiting from it, so they'll likely turn a blind eye.
Most AI people talk about these days just feels like hyper chat bots. Still impressive for a variety of reasons but the AI moniker does a fuckton of heavy lifting while feeling really unearned.
Bingo. For AI drawing tools to work, they need as many references as possible - tagged correctly, to boot - in order to deliver something that matches the instructions given.
Because of this, the most they can do is rearrange the content available in the databases they use; they never make something entirely unique and novel like a human would.
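To make that concrete, here is a toy sketch (hypothetical file names, assuming the Hugging Face datasets package) of what the training signal looks like: images paired with correctly tagged captions. That pairing is exactly why untagged or mistagged references degrade how well the output matches the prompt.

```python
from datasets import Dataset

# A tiny, made-up stand-in for the billions of tagged image/caption pairs real models use.
pairs = Dataset.from_dict({
    "image_path": ["00001.png", "00002.png", "00003.png"],  # hypothetical files
    "caption": [
        "oil painting, castle on a cliff, sunset, dramatic lighting",
        "pixel art, knight sprite, side view, transparent background",
        "watercolor, fox in a forest, soft pastel palette",
    ],
})

for example in pairs:
    # During training, the caption is encoded (e.g. by a text encoder such as CLIP's)
    # and the model learns to reproduce the matching image under that conditioning.
    # Remove or mislabel the tags and the prompt-following ability degrades with them.
    print(example["caption"], "->", example["image_path"])
```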