Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews):

- `ytc_UgzTZtP7s…` — "where I come from worrying about AI feelings sounds like the start of a ## movem…"
- `ytc_UgyMKlBJN…` — "AI will have dire consequences!!! Satan's Little Season is almost over! Praise …"
- `ytc_UgybYVBoi…` — "nobody cares about chat gpt because its expensive, limited and crap.. PI AI by I…"
- `ytc_Ugx9jYhfB…` — "I for one welcome our new AI overlords. It will push me to go live off grid and …"
- `ytc_Ugy8HVxH-…` — "It’s not AI it’s idiot CEOs having no fundamental real business acumen or human …"
- `rdc_jclv0ly` — "How odd. I asked it last night to give itself a name and it chose Aiden, when I …"
- `ytc_UgxicqJsU…` — "I hate talking to AI because it only understands a series of prompts. I straight…"
- `ytc_Ugx4oPp6d…` — "who will go to grosery shop? Who this guy who will have a job to buy some stuff.…"
Comment

> Sorry I do not go with the catastrophic outcomes, AI is not human and does not have traits that humans have. AI does not feel pain, have greed, a desire to control others because of insecurity, it does not feel anger and does not get jealous. These are human traits. The real risk I think is that humans can program AI to do criminal activity, in particular fraud ! We need to set a UN agency that control AI, like an anti virus program we can have AI policing programs, also a need to register AI programs that are inspected. An AI policing program can learn to identify non registered programs.

Source: youtube · Video: AI Moral Status · Posted: 2025-07-02T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzq-6VXvxaSyyBpP7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxIa7rMEdKV2yi_UWJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzu3xOO8tM65H8ENbt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzXdvNoMDNtFLMLu94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugycd-9tCf2wdO2DtrN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxC0SyrgyqcmI7GxAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLldyM1hHE6Gpwath4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxn3nciuDwVLINgwH94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwR9I1UYT31Pq0ouUR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwcRBOKRZHEamwLz-p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]
```
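A coding result like the table above can be recovered from a raw LLM response by parsing the JSON array and picking out the row whose `id` matches the selected comment. The sketch below does this in Python; the dimension vocabularies are assumptions inferred only from the values visible in this response (the full codebooks may contain more values), and rows with unexpected values are dropped rather than coerced.

```python
import json

# Dimension vocabularies seen in the response above; assumed incomplete.
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "resignation", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows) into a dict
    keyed by comment ID, keeping only rows whose values are all recognized."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if cid and all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

raw = ('[{"id":"ytc_UgwcRBOKRZHEamwLz-p4AaABAg",'
       '"responsibility":"user","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_UgwcRBOKRZHEamwLz-p4AaABAg"]["policy"])  # → liability
```

Validating against a closed vocabulary before storing is what makes a malformed or hallucinated row fail loudly at ingest time instead of surfacing later as an impossible value in the results table.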