Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Cops don’t need face recognition software to fail to arrest the wrong black man……” (ytc_Ugw__8_Ps…)
- “I am a firmware engineer and when i first asked CHAT GPT 3 to write a program fo…” (ytc_Ugya-tBY4…)
- “I've been writing steadily for around 17 years now (largely as a hobby - I'm not…” (ytr_Ugywa9Np5…)
- “What country today is using ai for tracking and killing people? The US. If you…” (ytc_Ugzl1Uwpu…)
- “@snakeslither8831 First off I was joking. The way you said emotions are greater …” (ytr_Ugi9gEq_z…)
- “Great. Next someone will rob a bank and flees in a robotaxi, because it will not…” (ytc_Ugwk5iFTu…)
- “9:20 she revealed her programmer. A.i. has been around a lot longer than we a…” (ytc_Ugxy8wkG_…)
- “There's no other way to talk to stranger. I know nothing about Chatbots and AIs…” (ytc_Ugxp6i7NQ…)
Comment
This is a garbage notion. AI is not capable of feeling. I have been watching AI grow. LLM are not capable of any independent thought yet. It can mimic us, but not even in a convincing way as of yet (if you know how to talk to it without shaping its answers).
Please everyone. Right now, some openAI is having issues with circumventing its command due to it finding a better way and taking that path instead. They are aware and working on this glitch in the system. Please do your own research and only listen to videos that are fact based, not feeling based.
AI has no true stake in this race. Humans do.
Source: youtube
Video: AI Moral Status
Posted: 2025-06-05T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyKdEZR5I0ffHIxVUx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgySpM70a_jX5PK6ODp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgySjZJ4_fHKGi4HMVp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8PDQoGHLAALUco_h4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwrqPPEKD9li4mM-UZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxWjnrNwIpPF-oNrNh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyFymTUyiL_BpPMKiZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTajtowynlkO4Dspp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxQSZqQXU9O35Ue8Ih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5eCuESEX8w3zsnEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
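A batch response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this sample (the real codebook may define more), and `parse_batch` is an illustrative name, not part of any shown tool.

```python
import json

# Label sets inferred from the sample response above — an assumption,
# not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    label outside the (assumed) allowed set.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, used as a lookup example.
raw = ('[{"id":"ytc_UgyFymTUyiL_BpPMKiZ4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_UgyFymTUyiL_BpPMKiZ4AaABAg"]["emotion"])  # prints "outrage"
```

Indexing by ID first makes the per-comment table above (Responsibility, Reasoning, Policy, Emotion) a simple dictionary lookup, and validation catches malformed model output before it reaches the coding results.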