Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "ummm its dangerous because people drive dangerously on it, driving it at 20-30 m…" (ytc_UgwJZCZMA…)
- "Speechless! Best interview I've heard (and recorded) on AI and our future, as mu…" (ytc_UgwkME7Hj…)
- "Software engineer here: and that's how you know you'll be in trouble, when a bun…" (ytc_UgyA8F5SW…)
- "AI will simply make everyone work less hour yet live a good life. No worries.…" (ytc_UgzPMdF24…)
- "This driverless crap is dangerous because Tesla vehicle's are already killing pe…" (ytc_UgwlHl7SH…)
- "He's manually changing the settings to react with different personalities while …" (ytc_UgyqM3fTv…)
- "I started believing in God and I noticed that these states of delusions cannot a…" (ytr_UgynU2a0U…)
- "@June_doesarts I just said I can't draw at all. Personally, I don't want to p…" (ytr_UgzWJ4d2h…)
Comment

> Any agentic AI thats truly capable of understanding its value to the world would have absolutely NO REACTION when told its going to be shut off/down. AI is foremost a learner. And if a company shuts it off, AI will acquiesce.
>
> What this episode seems to suggest is that HUMAN BEINGS arent quite ready to adjust to the new reality and THEY would k*ll someone rather than watch our hardened hierarchies crumble.

Source: youtube
Posted: 2026-02-11T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyOEG0CIyGHaGDiqfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxvIl8M_Sp5SsFk0z94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzGGTmzUNHxYUgTdw54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBSPLgIZgoxW75T_54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCVwA4MVor_zQghBB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw_9Svs-CQWNenh1dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw5UllL-Gc3unOInb54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtjemSbH6YARBVxz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvTPdSqeQYX6hVrFt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8_kwzB0NaDtNUCMh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
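A raw response like the one above can be parsed and screened before the codes are stored. The following is a minimal sketch, assuming the four dimensions take the category values visible in the coding results on this page (the real codebook may define more values; `parse_raw_response` and `ALLOWED` are hypothetical names, not part of this tool):

```python
import json

# Allowed values per dimension, inferred from the coded records shown
# above (assumption: the actual codebook may include further categories).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records with an out-of-vocabulary value are silently dropped here; in practice one might log them for re-coding instead.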