Tygre

Member
Oct 25, 2017
11,450
Cheshire, UK
Perhaps this upcoming over-reliance on things that break or could be used maliciously has a few downsides.

Yeah, it's worrying.

I mean this is the first time human society has ever relied on something that might break or be used maliciously.

Everything we've based our society around up to now has been flawless, with no possibility of failure and totally unusable by any bad actors, so this is a big change.
 
Feb 15, 2023
4,313
Yeah, it's worrying.

I mean this is the first time human society has ever relied on something that might break or be used maliciously.

Everything we've based our society around up to now has been flawless, with no possibility of failure and totally unusable by any bad actors, so this is a big change.

I was being flippant, but of course if we put all our faith in something we're not specifically trained in and that lacks human experience, and rely on it (as some people in power already do), then perhaps the over-reliance some are insisting on is unwise.

Context is important, as always, particularly in this case with transformative technology. And we should really learn from our mistakes, as we should've done with social media and the faith we placed in that. We had the British government relying on WhatsApp and that didn't turn out too well. The cycle continues.
 

Rolodzeo

Member
Nov 10, 2017
3,677
Spain, EU
This is my favorite one

[attached image: a garbled ChatGPT response]


It's either romantic longing and yearning or Eldritch madness
That's... scary.
 
Oct 27, 2017
3,707
I was trying to get it to do some really basic math yesterday and it could not for the life of it give me an accurate result. It was always off.
The issue outlined in the OP is completely unrelated to its ability to do maths.

ChatGPT operates using a language model. You should not, under any circumstances, be using it to do maths or calculations. The way large language models work makes them fundamentally unsuitable for maths problems. While developer plugins can make this more reliable, relying on a bare LLM for maths is a fundamental misuse of the technology.
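
To illustrate why (a toy sketch only, with invented probabilities; no real model is implemented like this, but the principle holds): generating an answer means sampling the next token from a probability distribution, and nothing in that process ever performs the calculation.

```python
import random

# Toy sketch with invented probabilities: a language model scores candidate
# next tokens and samples one. Nothing here ever evaluates 2 + 2.
next_token_probs = {
    "4": 0.90,    # most common continuation of "2+2=" in training text
    "5": 0.04,    # a plausible-looking digit still gets some probability
    "22": 0.03,   # concatenation also "looks like" text it has seen
    "four": 0.03,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())
answer = random.choices(tokens, weights=weights)[0]
print(f"2+2={answer}")  # usually "4", occasionally not: sampling, not arithmetic
```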
 

gozu

Banned
Oct 27, 2017
10,442
America
In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations.
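
(For intuition, here's a rough sketch of the failure mode that postmortem describes; the tokens, scores, and perturbation below are all invented. If the GPU kernels compute slightly wrong scores, different tokens can win the sampling step and the output turns to nonsense.)

```python
import numpy as np

# Rough sketch of the postmortem's failure mode (all numbers invented):
# token choice depends on scores computed on the GPU, so a numerical bug
# there can make gibberish tokens win the sampling step.
rng = np.random.default_rng(0)
tokens = ["the", "a", "zx#", "qpw"]      # last two stand in for gibberish
logits = np.array([5.1, 5.0, 1.0, 0.5])  # correct scores

def sample(scores):
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return tokens[rng.choice(len(tokens), p=probs)]

buggy = logits + np.array([0.0, 0.0, 4.6, 4.8])  # hypothetical kernel error
print([sample(logits) for _ in range(5)])  # sensible words dominate
print([sample(buggy) for _ in range(5)])   # gibberish now wins often
```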

Did Nvidia/Azure release new GPU configurations recently? Or was this a coding mistake by OpenAI? What % of answers were affected by the bug?

The issue outlined in the OP is completely unrelated to its ability to do maths.

ChatGPT operates using a language model. You should not, under any circumstances, be using it to do maths or calculations. The way large language models work makes them fundamentally unsuitable for maths problems. While developer plugins can make this more reliable, relying on a bare LLM for maths is a fundamental misuse of the technology.

Correct. The current fix is to make the LLM either call an API to something that CAN do math (Wolfram Alpha?) or write Python code (which can do math) that outputs the correct result, though the latter is less reliable because ChatGPT often writes broken code and can need three or more drafts with user feedback before it finally produces something that works.

So yeah, using the pure LLM to do math is a no-no, but if you can use plugins/API calls/generated code then maybe it's OK. Much progress still needs to be made.
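
For the curious, the delegation pattern looks roughly like this. This is a minimal sketch, not OpenAI's actual plugin API: the tool_call dict stands in for whatever structured output the model emits, and only the deterministic calculator side is implemented.

```python
import ast
import operator

# Minimal sketch of the "let something else do the math" pattern.
# In a real system the model emits a structured tool call; here we
# fake that step and only implement the deterministic calculator.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expression: str) -> float:
    """Safely evaluate arithmetic by walking the AST -- no LLM involved."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

# Pretend the model decided to call the tool instead of "predicting" digits:
tool_call = {"name": "calculate", "arguments": {"expression": "1234 * 5678"}}
print(calculate(tool_call["arguments"]["expression"]))  # 7006652, every time
```

The point being: the arithmetic happens in ordinary code, so it's right every time; the LLM only decides when to call it.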
 

Heliex

Member
Nov 2, 2017
3,197
The issue outlined in the OP is completely unrelated to its ability to do maths.

ChatGPT operates using a language model. You should not, under any circumstances, be using it to do maths or calculations. The way large language models work makes them fundamentally unsuitable for maths problems. While developer plugins can make this more reliable, relying on a bare LLM for maths is a fundamental misuse of the technology.
Well I'm gonna keep doing it cuz it's useful!
 
Oct 27, 2017
3,707
Well I'm gonna keep doing it cuz it's useful!
I mean, you can do it if you want, nobody is going to stop you, in the same way you can try to use a calculator as auto-correct or Grammarly for art inspiration, but you shouldn't be surprised when it doesn't give accurate results. You're using the completely wrong tool for the job.
 
Feb 15, 2023
4,313
I mean, you can do it if you want, in the same way you can try to use a calculator as auto-correct or Grammarly for art inspiration, but you shouldn't be surprised when it doesn't give accurate results. You're using the completely wrong tool for the job.

This conversation seems to be proving my point somewhat!

edit - after some thought about all this ChatGPT LLM stuff, it does make me think of the Horizon Post Office scandal in the UK. In the 90s, the Post Office here instituted not one but two flawed computer systems, both with glitches that made it look like money was disappearing from the accounts of the postmasters who ran often quite small Post Offices around the country.

These postmasters were accused of stealing, prosecuted, disbelieved, fired and discriminated against in their communities. Many of them - I think the majority - were minorities.

Over 25 years later, after a TV series brought it to national attention (despite the government having known about it for a decade), those postmasters are only now receiving compensation for years of trauma.

The issue was that managers thought the program, Horizon, and its predecessor were completely infallible. Fujitsu basically covered it up, and successive people ignored the problem as it piled up. More and more postmasters were sucked into this trap, but the view was that it was more likely they were stealing than that something was wrong with the software.

Now we have a situation where a very new program, ChatGPT, with numerous flaws, is being relied upon for all manner of tasks.

I do wonder whether, in future, due to poor management and an unwillingness to understand the vagaries of the technology, LLMs will have their own version of a Horizon scandal. Sure, this is all essentially human error, but I think placing trust in these technologies, particularly in the way they get pushed by their billionaire founders, is something that'll eventually end in some kind of disastrous "unforeseen" consequence on a scale similar to Horizon.

Similarly to Horizon, I predict minorities will make up the majority of the people affected by these issues.
 

Marmoka

Member
Oct 27, 2017
5,282
How the hell are you guys getting those outputs? I've already tested it and it seems to be working well.
 
Oct 27, 2017
3,707
Now we have a situation where a very new program, ChatGPT, with numerous flaws, is being relied upon for all manner of tasks.

I do wonder whether, in future, due to poor management and an unwillingness to understand the vagaries of the technology, LLMs will have their own version of a Horizon scandal. Sure, this is all essentially human error, but I think placing trust in these technologies, particularly in the way they get pushed by their billionaire founders, is something that'll eventually end in some kind of disastrous "unforeseen" consequence on a scale similar to Horizon.

Similarly to Horizon, I predict minorities will make up the majority of the people affected by these issues.
As Servbot highlights below:
Practically all tech can break or be used maliciously

None of what you highlight is particularly specific to ChatGPT. What you're describing is true of any technology: "poor management and an unwillingness to understand the vagaries of the technology" resulting in inappropriate usage can have significant (and dangerous) consequences.

Almost any system a company introduces carries significant risk and requires those implementing it to inform themselves about appropriate usage. Customer service platforms, fraud detection systems, point-of-sale systems, recommendation systems, marketing systems, HRM systems, CRM systems, individual technologies integrated into business flows (Looker, Excel, Alteryx, specific IDEs, etc.), general adjustments to workflows (e.g. using Google Search), or even computers in general (the infamous Patriot missile system failure) can all result in catastrophic consequences if implemented by people who don't bother to understand their limitations, best practices, and appropriate use cases.

There's no point speculating about 'if' there'll be a scandal attributed to AI; there are already significant scandals attributed to misuses of machine learning (e.g. the infamous Amazon recruitment tool), and there will be many more. The key is exactly as you highlight: it is fundamentally human error, driven by incompetence and a complete failure to implement tools (or practices around those tools) appropriately. Building an AI-driven feature and failing to account for its limitations, the need for safety fallbacks, and so on is a fundamentally human problem, little different to implementing any other mathematical model and failing to account for that model's limitations. Particularly in the context of the thread, where the GPT service was disrupted by errors that reached production, with potential knock-on effects on dependent applications, this is really no different to any other web application that depends on external services (AWS outages, Cloudflare outages, Azure outages, GCP outages, etc.): service outages, bugs and disruption are something you should already be designing for.
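
(To make that last point concrete, here's a sketch of the kind of defensive wrapper meant; the endpoint, payload shape and fallback are all hypothetical:)

```python
import requests

# Hypothetical example: treat an AI/LLM service like any other flaky external
# dependency, with a timeout, retries, and a non-AI fallback path.
SUMMARY_ENDPOINT = "https://example.com/v1/summarise"  # made-up URL

def summarise(text: str, retries: int = 2) -> str:
    for attempt in range(retries + 1):
        try:
            resp = requests.post(SUMMARY_ENDPOINT, json={"text": text}, timeout=5)
            resp.raise_for_status()
            return resp.json()["summary"]
        except (requests.RequestException, KeyError, ValueError):
            pass  # outage, timeout, or malformed response; retry
    # Safety fallback: degrade gracefully instead of failing the whole request.
    return text[:200]
```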
 
Feb 15, 2023
4,313
As Servbot highlights below:


None of what you highlight is particularly specific to ChatGPT. What you're describing is true of any technology: "poor management and an unwillingness to understand the vagaries of the technology" resulting in inappropriate usage can have significant (and dangerous) consequences.

Almost any system a company introduces carries significant risk and requires those implementing it to inform themselves about appropriate usage. Customer service platforms, fraud detection systems, point-of-sale systems, recommendation systems, marketing systems, HRM systems, CRM systems, individual technologies integrated into business flows (Looker, Excel, Alteryx, specific IDEs, etc.), general adjustments to workflows (e.g. using Google Search), or even computers in general (the infamous Patriot missile system failure) can all result in catastrophic consequences if implemented by people who don't bother to understand their limitations, best practices, and appropriate use cases.

There's no point speculating about 'if' there'll be a scandal attributed to AI; there are already significant scandals attributed to misuses of machine learning (e.g. the infamous Amazon recruitment tool), and there will be many more. The key is exactly as you highlight: it is fundamentally human error, driven by incompetence and a complete failure to implement tools (or practices around those tools) appropriately. Building an AI-driven feature and failing to account for its limitations, the need for safety fallbacks, and so on is a fundamentally human problem, little different to implementing any other mathematical model and failing to account for that model's limitations. Particularly in the context of the thread, where the GPT service was disrupted by errors that reached production, with potential knock-on effects on dependent applications, this is really no different to any other web application that depends on external services (AWS outages, Cloudflare outages, Azure outages, GCP outages, etc.): service outages, bugs and disruption are something you should already be designing for.

I'm not disagreeing with you at all, but AI is currently in a push that's so "all in" that it's not hard to compare it to other examples of catastrophic tech failure, such as Horizon. Horizon is a warning shot about reliance on technology that was left to rot for well over a decade. Yes, it also takes human competence to recognise the error and act on it, but imagine the Horizon issue on a far wider scale. This isn't fearmongering about AI, but an observation that mass reliance on LLMs and AI so early in their infancy creates the potential for even larger failures than simpler systems produce. Then there's the threat of mass redundancy of human experience - which should remain essential but for whatever reason isn't - which we're already seeing in the generative AI field. It's less the threat of the tech replacing artists and creatives in media than an actual, active push to do so, without realising that the experience lost will impact future creative endeavours.

There's also the fact that, once more, we're ever more reliant on either singular, gigantic corporate behemoths or so-called billionaire tech luminaries.

My observation is not just about simple outages, but about a mass shift in how we use technology to replace human experience and adaptation. CEOs and managers already push to dismiss failures and bugs as it is - hence my example of Horizon - and we know that over-complication of systems creates more potential for failure. It's why many important fields still use fairly simple technology - libraries, nuclear power stations, aircraft - because the experts in those fields know it works, and they know how to recognise failure. By placing the potential for failure of important systems in technology owned and run by others, you increase the risk of failure.

It's not that AI is some all-destroying behemoth; it's the humans that are striving to drive it that way. We're relying on technology almost too much in some areas already, and a push to do so even more, without adapting to the consequences or recognising that it's not a be-all-and-end-all solution, just seems earth-shatteringly naive. It pays not to forget that these systems exist mainly in the pursuit of profit and to push out human roles, and are very rarely created with the goal of complementing them.

It might not be an original observation, and it's been argued to death here already, but given I was being pulled up on a flippant remark, I may as well elaborate.
 

Jedi2016

Member
Oct 27, 2017
16,334
ChatGPT operates using a language model. You should not, under any circumstances, be using it to do maths or calculations. The way large language models work makes them fundamentally unsuitable for maths problems. While developer plugins can make this more reliable, relying on a bare LLM for maths is a fundamental misuse of the technology.
That's exactly the problem with AI right now. It's not directly about what AI can and can't do; it's about what people believe it can do. I wouldn't be surprised if 90% of the people who use ChatGPT honestly believe that it's thinking and really computing the answer, rather than just putting one word in front of another.
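
A toy example of what "putting one word in front of another" means mechanically. The bigram table below is invented and a real LLM is vastly more sophisticated, but the shape of the generation loop really is just this:

```python
import random

# Toy bigram "model" with invented counts: pick one next word at a time
# from a probability table. There is no reasoning step anywhere.
bigrams = {
    "the":    {"answer": 0.6, "model": 0.4},
    "answer": {"is": 1.0},
    "model":  {"is": 1.0},
    "is":     {"4": 0.7, "5": 0.3},  # chosen by frequency, not by arithmetic
}

word, sentence = "the", ["the"]
while word in bigrams:
    options = bigrams[word]
    word = random.choices(list(options), weights=list(options.values()))[0]
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the answer is 4" -- or "is 5", by chance
```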
 

The Albatross

Member
Oct 25, 2017
39,536
SCISSORS

..

I gotta get in on this shit. Use GPT4 a decent amount at work. Time to write some unit tests baby
 
Feb 15, 2023
4,313
That's exactly the problem with AI right now. It's not directly about what AI can and can't do; it's about what people believe it can do. I wouldn't be surprised if 90% of the people who use ChatGPT honestly believe that it's thinking and really computing the answer, rather than just putting one word in front of another.

Man I wish I was better at brevity, because 👆 this.