Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Cue all the dumb morons in chat poo pooing AI LLMs as a nothing burger or compla… (ytc_Ugz6Jf-iL…)
- Future wars will be against superintelligent AI who want all the energy, water a… (ytc_UgyNcMHBY…)
- While I do share the sentiment and I call myself a part-time AI Doomer, the soon… (ytc_UgzywcV3r…)
- Only AI is humans getting programmed. Living in a cubicle w/eye on ya controllin… (ytc_UgwGCoG2P…)
- I’m not surprised as I chat with it like it’s an old friend and WAY prefer it’s … (rdc_jkq2mtc)
- It is human hubris to assume that a super intelligent AI would even be bad. We a… (ytc_Ugyx4lIbH…)
- I've noticed ChatGPT say "it's complicated" or some other variant thereof on a r… (ytr_Ugx1vpmwH…)
- Sunder and Geoffrey. Take time. Unify. I am not in a hurry. As I said to Joe Rog… (ytc_UgzwgvKXe…)
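The same lookup can be reproduced outside the page against an exported coding file. A minimal sketch, assuming the codings are stored as JSON Lines in a file named `coded_comments.jsonl` with an `id` field per record (both the filename and the field name are assumptions, not the project's actual layout):

```python
import json

def load_codings(path: str) -> dict:
    """Index coded comments by their comment ID (assumed field name: "id")."""
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            index[record["id"]] = record
    return index

if __name__ == "__main__":
    codings = load_codings("coded_comments.jsonl")  # hypothetical filename
    # ID taken from the raw response shown further down
    print(codings.get("ytc_UgxbQi9de76edWf2MVJ4AaABAg"))
```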
Comment
0:17 - WAIT ! ✋🏻 HALT ! ! !
You Cannot Qualify Something that is not a moral agent as an, amoral psychopath.
That qualified as anthropomorphism. Things that are not human do not have a human mind. Therefore everything in the DSM and diagnosing something with a mental illness or a personality disorder does not work outside of humanity therefore Psychiatry is useless.
My point is that you cannot qualify an alien and animal or a robot as a Psychopath or any mental illness that would relate to a human being. You can't Even do that with Humanoid's !
youtube · AI Harm Incident · 2025-07-26T06:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
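Programmatically, the row above is one record: four coded dimensions plus a timestamp. A minimal sketch of such a record in Python; the class name and field names are assumptions, and the link between this comment and a specific ID in the raw response below is also an assumption:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: the four dimensions plus the time the coding was stored."""
    comment_id: str
    responsibility: str  # e.g. "developer", "user", "ai_itself", "none", "unclear"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "mixed"
    policy: str          # e.g. "regulate", "industry_self", "liability", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "resignation", "mixed"
    coded_at: datetime

# Values copied from the table above; the comment ID is assumed to be the
# raw-response entry below whose values happen to match.
example = CodingResult(
    comment_id="ytc_Ugw9PGDCFbGyl9eFJDd4AaABAg",
    responsibility="unclear",
    reasoning="deontological",
    policy="unclear",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```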
Raw LLM Response
[
{"id":"ytc_UgxbQi9de76edWf2MVJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzM1X2g3GxMrlirrZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwEWaZxlU3POji3PR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyjkYLnpTF4CozCpGJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwRIcBxSsM_zSxyZt54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw3WZ_nOHfIjYDTjcl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwoH35RdD3yzEFllNh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9PGDCFbGyl9eFJDd4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyiIz7cSjo8I3T9mQh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy-p6TOs39rhzFD-KF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
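The model returns a bare JSON array, so a downstream step presumably parses it and sanity-checks the values before they become coding results. A minimal validation sketch; the allowed-value sets below are inferred only from this sample, and the real codebook may define more categories:

```python
import json

# Allowed values inferred from the sample response above; the real codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval", "mixed"},
}

def parse_batch(raw: str, expected_ids: set[str]) -> list[dict]:
    """Parse one raw LLM response and drop entries that are malformed or unexpected."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of coded comments")
    valid = []
    for entry in entries:
        if entry.get("id") not in expected_ids:
            print(f"unexpected id: {entry.get('id')}")
            continue
        bad = [dim for dim, allowed in ALLOWED.items() if entry.get(dim) not in allowed]
        if bad:
            print(f"{entry['id']}: out-of-codebook values for {bad}")
            continue
        valid.append(entry)
    return valid
```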