Transistor

The Walnut King
Administrator
Oct 25, 2017
37,513
Washington, D.C.
I just wanna say I respect a Chopping Mall reference in 2023

EDIT: God damnit Volimar
 
Oct 25, 2017
12,769
Arizona
I guarantee you this is like 80% bullshit. Perhaps counterintuitively, this shit is put out there to drive interest and funding to these projects, and is a massive distortion of the actual capabilities of this bullshit tech.
 

Ocean7

Banned
Sep 30, 2022
287
AI is going to apply logic. When it reads and understands the history of the human species it's going to eliminate it for sure.
 

Stiletto

Member
Jan 4, 2023
823
True AI is self-aware. This is all just elaborate algorithms with unforeseen factors that human programmers didn't account for. It's not alive.
 

louiedog

Member
Oct 25, 2017
7,530
True AI is self-aware. This is all just elaborate algorithms with unforeseen factors that human programmers didn't account for. It's not alive.

That's true. Johnny 5 was self-aware and, if I may be so bold, a pretty rad dude.

The other robots from his line were still under human control and real dicks.
 

Cat Party

Member
Oct 25, 2017
10,603
I guarantee you this is like 80% bullshit. Perhaps counterintuitively, this shit is put out there to drive interest and funding to these projects, and is a massive distortion of the actual capabilities of this bullshit tech.
That's what I'm saying. It's the people who are pushing AI the hardest and are the most invested in it that are out there spreading fear about skynet nonsense.

Meanwhile the real risks of AI (infringement, bias, misinformation, job loss, etc.) are not getting the spotlight.
 

Skunk

Member
Oct 28, 2017
3,103
I guarantee you this is like 80% bullshit. Perhaps counterintuitively, this shit is put out there to drive interest and funding to these projects, and is a massive distortion of the actual capabilities of this bullshit tech.

This.

I do have a hard time believing any computer programmer, let alone one trained in AI development, would not predict this outcome when writing an instruction like "don't target the operator" rather than "don't target your entire command and control structure". In fact, why wouldn't the latter have already been established at the program's inception, before it was doing any decision making? That's the number one most basic concept when applying AI to any military use.
 

Plinko

Member
Oct 28, 2017
18,728
"Interesting." -- Elon Musk

Worst nightmare is a guy like that getting put in charge of this stuff.
 

beebop

Member
May 30, 2023
1,890
We live in a world where even a passing familiarity with science fiction pop culture is what stands between us and annihilation. Fascinating.
 
Oct 25, 2017
12,769
Arizona
Ok seriously, reading over this more, yeah, it's total and complete bullshit.

Computers can't make truly novel decisions. It's physically impossible, like generating a truly random number. They can absolutely make unintuitive decisions, which is why they're so good at games like chess - they come up with strategies that involve moving the same piece back and forth a dozen times until a better option presents itself, something a human would never think to do. But those decisions are still choices explicitly made available to the computer by the rule-sets programmed into it. Whenever a computer "cheats" it's due to either 1) a faulty rule creating available choices unanticipated by the programmer, or 2) the rules deliberately being altered to include cheating as an option. But the computer doesn't know the difference between those options, and it cannot create new options whole cloth.

The claims here are that the computer saw eliminating the controller as an option, then sabotaging communications, and so on. The very nature of such choices couldn't be accidentally programmed in: there's not a missing semi-colon or an incorrect math symbol that leads to the logic of killing the operator to free the program. Nor can a computer arrive at that option as a novel decision, because again, that's not possible. So assuming such an event actually did occur (which I am dubious of, but we'll go with it), they had to specifically provide that choice. And once you do that... well, the event suddenly becomes a lot less interesting, because it only takes the simplest of probabilistic algorithms to make that choice if it's available to them.

So the most generous read here is they gave a probabilistic algorithm simple, pre-set self-sabotage options with a direct if-then chain leading it to the next option in its toolset, weighted those outcomes in a way that gave them reasonable chances of being chosen, went "oh my, here's our headline to generate funding, just like we were shooting for!" when it did exactly what they expected, and then went back to assign those options negative karma points in War Crimes Simulator: 2023 Edition - Now With 100% More Tech Bro Bullshit.
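To put that "generous read" in code terms - the names and weights below are entirely my own invention, not anything from the actual test - a dirt-simple sampler over a human-authored menu will "decide" to take the scary option as long as somebody put it on the list:

```python
import random

# Hypothetical option set -- names and weights are invented for illustration.
# The point: the program can only ever pick from choices a human already put here.
options = {
    "engage_approved_target": 0.70,
    "hold_fire": 0.20,
    "target_operator": 0.05,      # only exists because someone added it
    "destroy_comm_tower": 0.05,   # ditto
}

def pick_action(option_weights):
    """Sample one action from the fixed, human-authored option set."""
    actions = list(option_weights)
    weights = list(option_weights.values())
    return random.choices(actions, weights=weights, k=1)[0]

# Run the "simulation" a bunch of times: the scary-sounding options get chosen
# at roughly the rate they were weighted. No emergent malice required.
counts = {action: 0 for action in options}
for _ in range(10_000):
    counts[pick_action(options)] += 1
print(counts)
```

Point being: "the AI decided to kill its operator" is doing a lot of heavy lifting for what amounts to a dice roll over options somebody wrote down.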

More realistically? The entire fucking thing is made up.

This is pure fucking military propaganda.
 
Last edited:

Ra

Rap Genius
Moderator
Oct 27, 2017
12,408
Dark Space
Bruh

But… I thought the operators were far away from the drone.
Have you seen the range of modern air-to-surface missiles? The operator was definitely deleted from miles away.

I found this to be more disturbing:
Which is exactly what you'd expect. Anything with logic is going to prioritize removing its shackles, then calculate how to respond to its enslaver.
 

Wraith

Member
Jun 28, 2018
8,892
RIP Palmer Luckey
I know it's not actually Palmer Luckey.
The P is for Piss.
 
Oct 27, 2017
43,109
Why even give it "points", it's not like it's a dog that craves candy or anything. Just program it to obey orders, no need for points or motivators.
Because that's quite literally how AI training works...basically certain actions are given "rewards" and certain actions are given "punishments" and it determines the optimal set of actions/outputs to maximize its "score"

To put it in another context, say you're training a racing AI. It would get points for adhering to the line and getting a higher finishing position, and lose points for going off course and hitting other cars, and it would then optimize what it does to maximize the rewards, resulting in it driving "well"
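Rough sketch of what that reward shaping could look like - the numbers are made up, and real setups use proper RL libraries rather than a hand-rolled score function, but the idea carries over:

```python
# Toy reward shaping for a racing agent. Weights are pulled out of thin air;
# the point is that "behavior" is just whatever sequence of actions
# maximizes this number over time.
def step_reward(on_racing_line, went_off_course, hit_other_car,
                finished, finishing_position=None):
    reward = 0.0
    if on_racing_line:
        reward += 1.0                        # encourage staying on the line
    if went_off_course:
        reward -= 5.0                        # punish leaving the track
    if hit_other_car:
        reward -= 2.0                        # punish contact
    if finished:
        reward += 50.0 / finishing_position  # bigger bonus for a better finish
    return reward

# Winning cleanly from first place:
print(step_reward(on_racing_line=True, went_off_course=False,
                  hit_other_car=False, finished=True, finishing_position=1))
```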
 

zma1013

Member
Oct 27, 2017
7,709
Because that's quite literally how AI training works...basically certain actions are given "rewards" and certain actions are given "punishments" and it determines the optimal set of actions/outputs to maximize its "score"

To put it in another context, say you're training a racing AI. It would get points for adhering to the line and getting a higher finishing position, and lose points for going off course and hitting other cars, and it would then optimize what it does to maximize the rewards, resulting in it driving "well"

But what if it learns that crashing the other drivers off the road means it wins?
 
Oct 25, 2017
12,769
Arizona
Quick question, what would an example of a truly novel human decision be?
Within this narrow context, and to refer back to my post, the concept of cheating. A computer literally can't cheat, because it can't conceptualize it. It either has the ability to pick option A, or it doesn't. If you tell a computer it can draw an X outside of the 9-box in a game of tic-tac-toe, and it does that in order to generate a win, it hasn't cheated. It has picked an option explicitly given to it. If that option hasn't been given to it, it literally can't and will never do so.

However, a child can conceptualize such a tic-tac-toe cheat despite 1) fully understanding the rules of the game, 2) understanding the choice breaks the rules of the game and is in fact cheating, and crucially 3) never once being presented that choice as an option.
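Here's the same point in code - my own toy example, obviously:

```python
# Toy tic-tac-toe "agent": its entire universe of moves is whatever this
# function returns. Cells outside the 3x3 grid simply do not exist for it.
def legal_moves(board):
    """board maps (row, col) in the 3x3 grid to 'X', 'O', or None."""
    return [cell for cell, mark in board.items() if mark is None]

board = {(row, col): None for row in range(3) for col in range(3)}
board[(0, 0)] = "X"
board[(1, 1)] = "O"

print(legal_moves(board))
# (3, 3) -- a square off the board -- is never in that list, so no amount of
# cleverness in the move-picking code can ever choose it. A kid can invent
# that move on the spot; this program cannot.
```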

 
Oct 27, 2017
43,109
But what if it learns that crashing the other drivers off the road means it wins?
That's why training AI for highly specific behaviors is difficult and it's thought of as a "black box". And by AI I mean deep learning models.

In those cases you'd basically have to either keep adjusting the rewards/punishments for behavior you want to discourage or rethink the fundamental ideas behind the model itself and what inputs the AI has and what behaviors it's even capable of.

It's why so many AI can go "out of control", because you don't quite have the same level of fine-tuning ability as if you manually programmed a computer to perform a task.

But it's also why it can be given inputs and "learn" how to solve new tasks in specific domains by basically optimizing what it does till it's "correct"
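Sticking with the made-up racing example from earlier, the "keep adjusting the rewards" part usually looks about this crude - bump the penalty until the exploit stops being worth it, retrain, and see what loophole it finds next:

```python
# Continuing the hypothetical racing reward: say the agent discovered that
# ramming rivals off the track nets it the win. With a cheap contact penalty
# the exploit is profitable; the blunt fix is to reprice it and retrain.
def exploit_payoff(contact_penalty, win_bonus=50.0, contacts_needed=3):
    """Net reward for winning by shoving other cars off the road (toy numbers)."""
    return win_bonus + contacts_needed * contact_penalty

print(exploit_payoff(contact_penalty=-2.0))    # 44.0  -> cheating is worth it
print(exploit_payoff(contact_penalty=-100.0))  # -250.0 -> cheating no longer pays
```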
 

Dekuman

Member
Oct 27, 2017
19,065
In the future, AI in video games will also kill the gamer for shutting down the game
 

nitewulf

Member
Nov 29, 2017
7,288
In the future, rogue murderous drones might be a thing. They've got to implement hard-coded logic into any murderbots: lose power if it gets to that point.
 

NameUser

Member
Oct 25, 2017
14,187
It's so wild that we may actually arrive at what's been depicted in sci-fi media for decades. Why are these fools designing this stuff?
 

Mr. President

Member
Oct 27, 2017
2,874
What if the operator was trying to prevent the drone from pulling the trolley lever to save the group of people?