Raw LLM Responses
Inspect the exact model output for any coded comment; responses can be looked up by comment ID.
Random samples

- ytc_Ugic1b1d9…: "since everyone thinks this self driving Truck is such a good idea is it gonna pu…"
- ytc_UgxsnsZAI…: "I wonder what Karen meant when she said Sam Altman became very “persuasive” in c…"
- ytc_UgwNufBD7…: "An AI that ever pulls the lever is extremely concerning. You have no idea what t…"
- ytr_UgzlZ1hYA…: "@strayiggytv well actually it had its time in the sun and no one cares about 7 o…"
- ytc_UgzqqsoU8…: "I come form artist background and myself do art every now and then in drawing an…"
- ytc_Ugw0eugAi…: "Mr. Sanders, I have the utmost respect for you. Please note that AI will be fine…"
- ytc_UgyiSNDBS…: "A argument that the ai completely neglects is that god is god and he can do what…"
- ytc_Ugy1cpKxO…: "😂a tricky one ☝️ let’s make a good relationship hopefully we won’t be used as ba…"
Comment (youtube · AI Governance · 2025-06-19T11:2…)

"Either humanity will be eliminated by AI as soon as it is able to sustain and develop itself without humans, because humanity will then become useless- and the only threat to AI - or (less probable) AI will keep some of us alive, but will have total control over the remaining humans."
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwJcC1-ZwVii5UX_td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyoD0ZhUJESYR0Jrhh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygLnrDczpN7FGwpA54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiwcDuRiQ1PnCuwu54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9TBokqcv6sX4iezR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzY4IEwO683lR5coKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwebaOhCTuTRF--L394AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgznMpHrGcFjXRN1mJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz33CNR5hhEqtQTH6J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgypkkcdvUZkvQMpORF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
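A raw response like the one above is only useful if every record carries a valid label on all four dimensions. The sketch below shows one way to parse and validate such a batch before storing it. The allowed label sets are inferred from the values visible in this page and are an assumption; the real codebook may define more, and the function name `parse_coding_response` is illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above. This is an assumption; extend the sets to match the full codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM coding response and validate every record.

    Raises ValueError on malformed JSON, a missing comment id, or a label
    outside the allowed set, so a bad batch fails loudly instead of
    silently polluting the coded dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records
```

Failing the whole batch on a single bad label is a deliberate choice here: coded comments are keyed by id, so it is cheaper to re-request one batch than to hunt down invalid rows later.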