Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:
- "Nah. Doctors and lawyers are already overworked. There's not a shortage of patie…" (`rdc_fct0f5o`)
- "Im all set for a mini robot pet that i can raise he can run around ill give him …" (`ytc_Ugwn75618…`)
- "My friend Billie found out about my ai chats and now she says she will tell my p…" (`ytc_UgxWTedoA…`)
- "I hope AI can’t crawl into a 40 something degree Celsius ceiling space to run c…" (`rdc_j6gk2tg`)
- "This people have done AI and now they say that AI it's dangerous? Are they scare…" (`ytc_UgwWSbe30…`)
- "Can you explain the war on carbon? CO2 is plant food, we live in a carbon based…" (`ytc_UgwemPcpP…`)
- "3: can we just stop with the ai thing its gone out of hand and these stupid redi…" (`ytc_UgwmxW9bk…`)
- "99% of jobs , only if we allow it to happen ! It’s not inevitable at all. . Th…" (`ytc_Ugz6m2Z0x…`)
Comment

> I haven't watched the whole video yet but everyone misses a big risk. What if Ai gets seemingly good enough to replace most jobs, all industries pivot and it turns out Ai is a disaster that turns to garbage, perhaps it trains itself on itself and becomes useless and all the industries implode, collapsing whatever society we have at that point even if we get safety right with no time or plan to pivot back with ease it was to get there with the future benefit. Human ingenuity in disasters is perhaps enough but that would be one massive global disaster.

youtube · AI Governance · 2025-09-05T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
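The four coded dimensions appear to draw from a closed codebook. A minimal validation sketch, with the allowed values inferred only from the codes visible in this dump (the project's real codebook may contain more categories, so treat these sets as illustrative):

```python
# Allowed codes per dimension, inferred from values seen in this dump.
# These sets are an assumption, not the project's authoritative codebook.
CODEBOOK = {
    "responsibility": {"none", "government", "user", "developer", "company",
                       "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding result shown above validates cleanly:
result = {"responsibility": "distributed", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "fear"}
print(invalid_fields(result))  # []
```

A check like this is useful because LLM coders occasionally emit values outside the schema; flagging them per-dimension makes re-coding targeted rather than wholesale.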
Raw LLM Response
```json
[
{"id":"ytc_UgwtMZ498dGVfo_bcHd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwn4LMAaKJFfknwwI54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy736Rkwl_EJBQ7tyB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQ_XSNGfoAHITRtKV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyUAZPnKQPOmFODa_94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwyyrIbaG3NGt7l-Q94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugysq7uIQRYlKcYFRO94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQAc0zUuP3Vz60qzZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzKf2QovsHBfISLLsp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx9QPKaz0BPMgRvwT14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
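Since the raw response is a JSON array in which every record carries its comment ID, the lookup-by-ID view above reduces to parsing the array and indexing it by `id`. A minimal sketch, using two records copied from the response above (the parsing helper is illustrative, not the tool's actual code):

```python
import json

# Excerpt of the raw model output: a JSON array of coded comments.
raw = """[
{"id":"ytc_Ugysq7uIQRYlKcYFRO94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQAc0zUuP3Vz60qzZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Index the array by comment ID for constant-time lookup.
records = {r["id"]: r for r in json.loads(raw)}

coding = records["ytc_Ugysq7uIQRYlKcYFRO94AaABAg"]
print(coding["policy"], coding["emotion"])  # liability fear
```

The same index serves both views in this tool: direct lookup by comment ID, and resolving a clicked random sample back to its full coding record.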