Raw LLM Responses
Inspect the exact model output behind any coded comment: look up a comment by its ID, or browse the random samples below.
Random samples:

- ytr_UgxiuWGvq…: "Telsa are so safe that they are not included in many trials of self driving in E…"
- ytc_UggkbjzVx…: "good luck running your human taxi service for the same price as an automated one…"
- ytc_UgxaCAW0H…: "Super interesting.. my ET makes me think that as the USA became this insane mach…"
- ytc_UgxStZTSC…: "Thats all nice but wenn AI will destroy Humans, it will build full functional Se…"
- ytc_UgxKNxj2A…: "Elon Musk can afford to not think about it. He can just give his children a bill…"
- ytc_UgyRziyZ6…: "AI customer service is really 💩 at the moment. You cannot get a solution from it…"
- ytr_UgwDdw4x0…: "Real or not, my robot vacuum just quit its day job to become a boxer…"
- ytc_Ugzv1JUy1…: "You know who you are. Just because its becoming a "trend" does not mean that con…"
Comment

> 29:40
> I really like his quote here: "We're not used to thinking about things smarter than us."
> Arrogance, greed, and underestimating the danger of AI will be what kills all human race.

Source: youtube | Topic: AI Governance | Posted: 2025-10-22T11:5… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyebo4BrVheoNfNaiV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx-aJrMns3RaKj9a0d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwhjNSYSWCytZ4ZfO94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"frustration"},
  {"id":"ytc_UgyRDQOFVMyX0w2sf4p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzLPqbJKyLMvF70m1x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzGzRc32csbkPYIF2t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwvn9lJQoLoX5KRR-h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx8cc-c96Q6mQmO49V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvDuzOwKlnR58Cuip4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzQF0MxD0GUhl4Ah_R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
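A raw response like the one above can be turned into coded records by parsing the JSON array and checking each row against the coding schema. The sketch below is a minimal example of that step; the allowed value sets are inferred only from the values visible in this response and the Coding Result table, so the real codebook may well include values not listed here.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "government", "company", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "frustration", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only rows whose values fit the schema."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every dimension must be present and hold a known value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
records = parse_coding_response(raw)
```

Rows that fail validation are dropped here for simplicity; in practice they would more likely be logged and queued for re-coding.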