Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugy9OtVlQ…: Even this robot feels cringe. They actually care about humans. I respect this ro…
- ytr_UgyHJymXh…: You should see the oneyplays AI dungeon videos, idk if that specific program lea…
- ytc_Ugy4CDBDD…: Asking an AI for its consent? Maybe someone should be asking me for my consent b…
- ytc_UgzrSXfQ9…: The elite powers who profit from human conflict, discord, division, a lack of tr…
- ytr_UgyAk-g-p…: @stanvassilev LOL. It's not about just "numbers". We are talking about scientif…
- ytc_UgzDOs667…: I will never get into a driverless vehicle. I will drive myself. My opinion is t…
- ytc_Ugz6zOGD2…: The premise at the start is wrong. This is pretty well studied at this point. LL…
- ytc_Ugys17yEv…: Surely as we approach super intelligence, we could also get super intelligent AI…
Comment
- asks ChatGPT to say something nefarious -
- ChatGPT says something nefarious -
- “OMG ChatGPT said something nefarious!😱” -
Guns are not dangerous, HUMANS with guns are dangerous. AI is a tool and people that think it will “become alive and take over” are deluded.
Platform: youtube | Video: AI Moral Status | Posted: 2023-09-04T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id": "ytc_UgzO1Gibo0fZm09jskh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxWWDXo4UBjj287rPR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxF9w6v-NEDO55K42t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz4ujp9lH_t3kerzjJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzexe8W_ltG1PnExwJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkRJzrp5lnjnYopD14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwx3QcswFUUHa-qagB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzdSnutiKUrp22Xgpl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzysiehd84Au2je3Ax4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyfQ5awCyXBsipN5ml4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```