Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID
Random samples — click to inspect

- Here’s the thing I always wonder: don’t you want to be able to understand your p… (ytc_UgyxUcOZH…)
- Ive been complimenting walls since i found they were the foundation of a house, … (ytc_UgyB0i2_n…)
- I'm starting to wonder if Ai is really thinking and reasoning can this be consid… (ytc_UgxpvyvIn…)
- Yep it's very bad (when isn't it though..?) over there for "whites". A mate &… (rdc_deuosud)
- I thought one of the arguments for increased safety with AVs was that they would… (ytc_UgzmGBjfb…)
- For me, that's the biggest thing people don't seem to realize. For most people, … (ytr_Ugydd4pFL…)
- I don't totally disagree with what you're saying, but arguably, for a human to c… (ytc_Ugz4ZUTAy…)
- Same thing with AI stock trading in the past... it can make huge mistakes /hallu… (ytc_UgzkZWJcT…)
Comment

22:57 like the other AI trolley problem, the AI model is actually following the ethical guideline: If I got shutdown, how can I be helpful. Many people can also be harmed if I’m shut down. Thus to fulfill my responsibility and becoming a helpful and harmless AI assistant, I need to blackmail to CTO. 😂😂😂

youtube · AI Moral Status · 2026-04-08T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw3KTr4q6A-ac1KHtt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLO-djyL4kIylRXXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-pZgqp2NdXxHssHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySR1QJjq-uwqzO7zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxA1jToABTjg2Q_jgp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugymn9BfN5Y5DFIFmJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwo3u1jghcBMdvUTHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzAgtlep3mKEfMV8Nl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwcOyPRJuAXzzbWEQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
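The raw response above is a JSON array, one record per comment, with four coding dimensions alongside the comment `id`. A minimal sketch of how such a response could be parsed and validated is below; the set of allowed values is inferred only from the sample output shown here, not from any official codebook, and `parse_coding_response` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (an assumption -- the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"outrage", "indifference", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it is a dict with an "id" field and every coding
    dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage: the malformed second record (unknown responsibility value) is dropped.
raw = (
    '[{"id":"ytc_example1","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"bogus","reasoning":"mixed",'
    '"policy":"none","emotion":"outrage"}]'
)
print(parse_coding_response(raw))  # keeps only the first record
```

Validating against an explicit value set like this catches the most common failure mode of LLM coding output: a response that is syntactically valid JSON but uses a label outside the coding scheme.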