Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> The future pictures are always BS, in reality it'll be more mad Max, unless we get our health and stress levels under control it won't be ai that's a danger

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-08-13T02:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHbyHr8BQmKOJI_8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyfv3_yck0fEbd-vIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzb1_gtmPpOHb6sXWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlaLVtkoMqseLSwN94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwx6T6j_PG_4hmoZ2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxmen0r82zywpa0aT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzhNbZtrih6h9sxn1Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwOx40P27mm7BJIWAt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxvatfpCv0Y9hZ4x1t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxFudW6sfQhYS5ANwx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
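A batch response like the one above is only useful if every record parses and every coded value belongs to the codebook. The sketch below shows one way to validate such a response before storing it. The `ALLOWED` value sets are assumptions inferred from the values visible in this sample; the actual codebook may define more categories.

```python
import json

# Assumed category sets per coding dimension, inferred from the sample
# output above; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed records.

    A record is kept when it is a dict with an "id" and every coding
    dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; rejected records can be re-queued for recoding.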