Hm. Bit of a turnaround from "please don't bring up other posts" like 10 minutes ago.
> First off, I'd appreciate you not bringing up cross-thread drama

What other post did I specifically reference, Benita?
I recommend you read this: https://www.masstlc.org/the-more-machines-learn-the-less-we-understand-them-for-now/
The problem with machine learning is that sometimes it works like a black box. Data goes in, data comes out, you can't explain that. The machine can produce results, and humans can manually confirm those results, but the machine can't tell the human about the process it used to arrive at them. Here's an example of what this looks like in practice:
After AlphaGo made the news in 2016 for repeatedly trouncing the world's top Go players, we knew it was a superior Go "player", probably the best in the world. However, what we can't do is ask AlphaGo to tell us how it wins nor can we ask it to teach us its strategies because it doesn't have a concept of "strategy". It does a thing, if it works, it does the thing again. If it doesn't, it tries another thing.
The only person who can tell whether the machine is doing a good job is the technician overseeing its training. This does not mean the trainer has control over the machine's process; the trainer only confirms or rejects results according to their own desires and goals. This is where the real danger of deepfakes lies, because it's ultimately the trainer's moral compass that decides which results get accepted and which get rejected. The machine's capabilities are already beyond what humans are capable of, but human morality guides what the machine produces.
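The trial-and-error loop described above ("it does a thing, if it works, it does the thing again; if it doesn't, it tries another thing") can be sketched as a toy epsilon-greedy learner. Everything here is made up for illustration (the move names, the payout probabilities, the `train_bandit` helper); real systems like AlphaGo use deep networks and tree search, but the point survives: what the machine ends up with is just a table of numbers, with nothing in it resembling a "strategy" it could explain.

```python
import random

def train_bandit(payouts, steps=10_000, eps=0.1, seed=0):
    """Trial-and-error learner: try a move; if it pays off, favour it.

    `payouts` maps each move to a (hypothetical) win probability.
    The learner converges on the best move without ever forming a
    human-readable strategy -- all it stores is averaged results.
    """
    rng = random.Random(seed)
    value = {m: 0.0 for m in payouts}   # running average reward per move
    count = {m: 0 for m in payouts}
    for _ in range(steps):
        if rng.random() < eps:          # occasionally "try another thing"
            move = rng.choice(list(payouts))
        else:                           # otherwise "do the thing that worked"
            move = max(value, key=value.get)
        reward = 1.0 if rng.random() < payouts[move] else 0.0
        count[move] += 1
        value[move] += (reward - value[move]) / count[move]
    return value

values = train_bandit({"corner": 0.3, "edge": 0.5, "center": 0.8})
best = max(values, key=values.get)
# `values` is just numbers; nothing in it says *why* the best move wins.
```

Ask this learner to "teach you its strategy" and the only honest answer is that table of averages, which is the black-box problem in miniature.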
Post history = cross thread drama.
You maligned a guy for his post history. Benita brings up that fact, and you suddenly say, "Don't bring up ~~post history~~ cross-thread drama."
Then, minutes later in this same thread, you post a .gif that says "True to form." to malign Benita. I.e. "It fits his post history."
Only you can bring up post histories, right?
It really is.
I kind of want to see some side by sides of the real thing and this deepfake porn. Like are these deepfake videos that convincing? Or do they fall apart under a moment of scrutiny? I get why people are rightfully upset and fuck people who are using this to abuse others, but this news is just blowing my mind.
> Does anyone have a link of a side by side comparison of a real video and a deep fake video? Not a porn video, just something that shows off the tech.

How about this
> Deep fakes are easily noticeable and no one is going to deep fake someone's face onto a home movie off pornhub because iirc you need a very clear video to do it. Also what's the difference between this and a really good photoshop? If photoshops "can't" be used like this then deepfakes can't either. "Obviously this video of my buddy's wife getting fucked hardcore style in what looks like a studio level porn video is very real and not fake at all"

You can make a video featuring Obama talking about jihad, sharia law, and beheading Christians. With his face and with his voice. Think about what this would've done in 2008.
Society has needed a new ethical paradigm since, at latest, the rise of social media. We're almost entirely unprepared for a world where anyone with a device can falsify reality. The public has enough trouble distinguishing fact from fiction when events are recorded accurately.

As someone who literally spent the last 2 days rotoscoping for a freelance client, this type of technology would be life changing from a process perspective. But I haven't met a single editor or designer in my field who thinks this would be worth the tradeoff for public good. It's absolutely crazy. All of us know where this is heading. It's like watching a train wreck as it becomes easier and easier to use by anyone with basic editing software skills.
No, but you're going to interpret my words to suit your perspective no matter what I say, so eh
Anybody can do anything lol, I'm not in charge of resetera, I'm just a poster. I'm also free to ask someone not to do something.
> Looking forward to seeing RedMercury twist their way out of this one.

No twisting needed, but I love that you're so looking forward to it.
Yeah. What should I do? I didn't know anything about the social media discussion of the "men are trash" topic. Just enlighten me why you think that all men are trash and I am sure I will agree with all of the left/feminist/progressive points you make.
> it really depends on how many pics were used for ML. the more images used at different angles in different lighting the better the result. there was a thread here about it earlier in the year and some of the results were very convincing.

For most people there probably aren't enough similar photos of different angles to properly train it, from what I read back when this came out. Someone like Obama has tons of photos and videos out there that can be used. So this seems to be more of a famous people issue.
Edit: yup deepfakes
Yep, this is covered by defamation, specifically defamation per se, which deals with defamation regarding sexual activity and is easier to prove. What this means is it couldn't be made a crime, but it is also likely a civil infringement if it's used with hostile intent.
Yeah, there is a ton of footage of them that makes training the AI easy, I'd imagine.
> Anybody can do anything lol, I'm not in charge of resetera, I'm just a poster. I'm also free to ask someone not to do something.

Correct. You're definitely free to ask someone not to do the same exact thing you constantly do to other people.
I mean, do you see women doing anything remotely comparable?
I feel like this tech is something we've talked about before. Either here or in the twilight days of GAF.
Is the only way to stop this to regulate the software, or machine learning itself? Because this is just the beginning.
> You can make a video featuring Obama talking about jihad, sharia law, and beheading Christians. With his face and with his voice. Think about what this would've done in 2008.

There wasn't tons of footage of him to train with in 2008.
The Facebook/Twitter 24/7 news cycle didn't exist in 2008 either, true. I think this will change going forward. It's a lot of things happening really fast and coinciding at the worst time. Social media. Deep fakes. Trump, etc.
I really don't understand how deep fakes haven't been banned. Shit is super scary.
> I believe this is the main reason we need an actual controlled internet environment.

I'm sure the ones in control will be completely benevolent with this power.
...gtfo with this. So wildly not at issue, and never at issue. You're really gonna stroll into a thread about people creating content that attempts to destroy the lives of others to say WAIT IT ISN'T ALL MEN! NOT ALL MEN! like that's the problem here?
Deep fakes are easily noticeable and no one is going to deep fake someone's face onto a home movie off pornhub because iirc you need a very clear video to do it. Also what's the difference between this and a really good photoshop? If photoshops "can't" be used like this then deepfakes can't either. "Obviously this video of my buddy's wife getting fucked hardcore style in what looks like a studio level porn video is very real and not fake at all"
It really is. But we're about to see this used as a political tool as well. This won't be used just by some loser in their basement, but by sponsored trolls to destroy careers.
I fail to see how this can be protected under free speech of any sort. This is flat out lying, deceit and abuse.