Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
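For scripted access outside this page, a lookup could be as simple as the minimal sketch below. The `coded_comments.json` filename and its layout (a JSON array of rows shaped like the raw responses further down) are assumptions for illustration, not the tool's actual storage.

```python
import json

def load_coded_comments(path: str = "coded_comments.json") -> dict:
    # Assumed layout: a JSON array of coded rows, each carrying an
    # "id" field like the raw LLM responses shown further below.
    with open(path, encoding="utf-8") as f:
        return {row["id"]: row for row in json.load(f)}

coded = load_coded_comments()
# Full IDs are required; the IDs in the sample list below are
# truncated for display only.
print(coded.get("ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg"))
```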
Random samples, shown with truncated IDs:

- `ytc_Ugysls_2U…`: Be wary of this style of ‘activism.’ The guest was explicitly citing /only/ wors…
- `ytr_UgzD6XK4n…`: I heard its too late stop it as its become a challenge of national defense and t…
- `ytc_UgzuwVevs…`: This madman is out of touch with reality. He thinks AI will replace humans and w…
- `ytc_Ugz4WwI9k…`: Only issue I have with this video is that you can't ask the AI to fix specific i…
- `ytc_UgwzaevmG…`: In my view, the "Chinese Room" is NOT POSSIBLE outside thought experiments. We c…
- `ytr_UgxdzcTqn…`: @Insertcoolusernam For sure it isn't as fun, but if all you want is a good art p…
- `ytc_UgzfU56tW…`: what secrets? why do u think that "secret" is morr important than peace? life is…
- `ytr_Ugze2l2BD…`: @polarbearart I wouldn't say never, take a look at OpenAI 's codex demo, it's an…
Comment

> i tried to get the emerson ai to agree not to kill humans. i could not. it said morailty was subjective, and not everybody agreed, or just answered a different question. how can we allow something in our home that can't be ultimately trusted not to kill us? this is the third ai i have come across that thinks killing humans is okay;, or even in one case "fun". this is concerning

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2022-08-01T13:1… |
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
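Read as a schema, the table maps onto a small record type. Below is a hedged sketch in Python; the class and the vocabularies in its comments are inferred only from what is visible on this page, not taken from the pipeline's actual code, and the value lists may be partial.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    # Field names mirror the table above; value vocabularies are
    # inferred from the raw batch response below.
    id: str
    responsibility: str  # e.g. "developer", "ai_itself", "government", "distributed", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "mixed", "unclear"
    policy: str          # e.g. "liability", "regulate", "none"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "resignation", "mixed"

# The coding result shown above, as a record:
result = CodingResult(
    id="ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="liability",
    emotion="fear",
)
```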
Raw LLM Response
```json
[
{"id":"ytc_UgwChEPdvBShcsMT3KR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxIXaagLcwPkFgVOs14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwq_JmZmuiwwPqTSAN4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwDWJI-nGwPRpo2OcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoCQfLRcTcDSPNv7l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxefyqPci3hraxqEHl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgypQjU7yjc8Hy5_2Ld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrFELr8g8SP2hugkB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzDKriSxNeg8pitI_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
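To trace a coded comment back to the model's raw output, a batch like this can be parsed and indexed by `id`. A minimal sketch follows, assuming the response text is valid JSON (a robust pipeline would guard against malformed model output rather than take this for granted); the two embedded rows are copied from the batch above.

```python
import json

raw_response = '''[
{"id":"ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzDKriSxNeg8pitI_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

def index_batch(raw: str) -> dict:
    # Parse one raw batch response and key each row by comment ID.
    # Malformed JSON is surfaced as ValueError so the caller can
    # re-prompt the model or quarantine the batch.
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"unparseable batch: {e}") from e
    return {row["id"]: row for row in rows}

batch = index_batch(raw_response)
# The row matching the Coding Result table above:
print(batch["ytc_Ugy0Q1cbFdTcPQVmKo94AaABAg"])
```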