Nightfox wrote to jimmylogan <=-
Sorry - didn't mean to demand anything. I just meant the fact that someone says it gave false info doesn't mean it will ALWAYS give false info. The burden is still on the user to verify output.
Yeah, that's definitely the case. And that's true about it not always giving false info. From what I understand, AI tends to be non-deterministic in that it won't always give the same output even
with the same question asked multiple times.
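As a rough illustration (not how any particular chatbot is actually built), here's a toy Python sketch of temperature-based sampling: the same prompt can come back with a different answer on each run because the model picks from a weighted distribution instead of always taking the single best option. The candidate answers and scores below are made up purely for the example.

import math
import random

# Toy next-token/answer distribution for one fixed prompt. Higher scores
# mean "more likely"; temperature controls how flat the distribution is.
CANDIDATES = {
    "The capital of Australia is Canberra.": 5.0,
    "The capital of Australia is Sydney.":   2.0,  # plausible-sounding but wrong
    "Australia's capital city is Canberra.": 4.0,
}

def sample_answer(temperature: float) -> str:
    """Pick one answer by softmax-style sampling over the scores."""
    scores = list(CANDIDATES.values())
    weights = [math.exp(s / temperature) for s in scores]
    return random.choices(list(CANDIDATES.keys()), weights=weights, k=1)[0]

if __name__ == "__main__":
    # Same "question" three times: the answer can differ run to run.
    for i in range(3):
        print(f"run {i + 1}: {sample_answer(temperature=1.0)}")

Run it a few times and you'll occasionally get the wrong answer mixed in with the right ones, which is roughly why the same question can produce different (and sometimes false) output.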
Mortar wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Fri Dec 26 2025 17:08:43
I've learned that part of getting the right info is to ask the right question, or ask it in the right way. :-)
Reminds me of the old computer adage, "garbage in, garbage out". Or
you could take the Steve Jobs approach: You're asking it wrong.
Mortar wrote to Nightfox <=-
Re: Re: ChatGPT Writing
By: Nightfox to jimmylogan on Fri Dec 26 2025 17:40:25
...it won't always give the same output even with the same question asked multiple times.
If it were truly AI, it would've said, "You've asked that three times. LEARN TO READ!"
Yep! I think that's part of the fuzzy logic. I've found that if I give it MORE context I get more specific answers to my current issue/condition.
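For what it's worth, here's a small sketch of what "more context" can look like in practice. It uses the OpenAI Python SDK as an example client; the model name, the prompts, and the truncated printout are just placeholders, and any chat-style API would make the same point: the detailed prompt pins the answer to the asker's actual setup instead of a generic guess.

from openai import OpenAI  # assumes the OpenAI SDK is installed and OPENAI_API_KEY is set

client = OpenAI()

# A bare question leaves the model guessing about the situation.
vague = "Why won't my script run?"

# The same question with context narrows the answer considerably.
detailed = (
    "Why won't my script run? It's a Python 3.12 script on Windows 11, "
    "launched from Task Scheduler, and it exits immediately with no output. "
    "It works fine when I run it by hand from a terminal."
)

for prompt in (vague, detailed):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is only an example
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content[:200], "\n---")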