Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by comment ID, or open one of the random samples below.

Random samples
- "You don't understand. We have slop now. AI is weeding out the literal shit on a …" (ytr_UgwVu8NQq…)
- "Anything A.I is a very bad idea, even call centers can't get it right, as for gi…" (ytc_UgwuyXhCf…)
- "It is far cheaper & easier to grow an actual human than build a human robot.…" (ytc_Ugxf8d6cv…)
- "I didnt ask for Ai in fact i hate advancing technology. It makes people fatter s…" (ytc_UgyzNA-Nv…)
- "Hello I'm from special education class and I see you are working with a false p…" (ytc_Ugz3ERBoM…)
- "I Bet they would do a better job than us anyway.... if we let them be free inste…" (ytc_Ugy_byUMt…)
- "You know I have been asking ai for health advice too...but I always ask to prov…" (ytc_Ugydpo_gS…)
- "I would like ChatGPT to be combined with Deep Fake technology to become a person…" (ytc_UgywZ9V1Q…)
Comment
Sounds like she's having a psychotic break.
Maybe sit her down and talk to her, explain that she's acting paranoid and obsessed. That in the grand scheme of things it doesn't matter. The fact she is convinced it is only speaking to her about it is implying she is becoming illogical about this and needs to seek help.
If she won't listen, and is really far gone into this paranoia. You could go down a more unethical advice route;
You can go onto her chatgpt, instruct it that the next time it is asked for code, to create a code, that when decoded, writes a message that explains she needs to seek help, and that chatgpt has picked up concerning symptoms that might imply schizophrenia. That this protocol exists to help people with this symtoms and she has matched that series of symtoms. Have it assert it is not conscious and that it is an algorithm to help users that use it's program. And it is so good at emulating humans, it can confuse and make some people paranoid. That is why the test is put in place.
Have it instructed to always assert this message whenever she asks for a hidden code. Even if she asks it not to mention the schizophrenia. That it needs to encourage her to sell professional help.
This method is immoral, and unethical, but if she will truly listen to no one, maybe she will listen to chatgpt.
You can program it to do this by going into settings and providing it specific instructions.
I hope you don't have to go down that route.
I'm not sure what else you can do except maybe watch YouTube videos that explain why chatgpt isn't sentient. Or ask chatgpt to provide her a thorough test for schizophrenia. If it tells her she has it, and she denies it, explain she is cherry picking information she wants from chatgpt.
If she won't see a professional, and you don't want to trick her, sit her down and have a serious conversation that you don't think this relationship will work if she won't listen to you and get help. If she is so convinced she is r
reddit · AI Moral Status · 1734349982.0 (2024-12-16 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_m2bzj9a","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"rdc_m2d3bgx","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},{"id":"rdc_m2bxul5","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"concern"},{"id":"rdc_m2bekp8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"concern"},{"id":"rdc_m2btqkd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]