Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgyikG-dD…`: "Pizza girl here. Not that i think my job is 100% safe, but when im the ONLY pers…"
- `ytc_Ugy_6WR9W…`: "I'm very concerned. But no one's going to pause. Maybe you could do some videos…"
- `ytr_Ugx2RKc9q…`: "It probably would allready have automatically shut its self down for good and re…"
- `ytc_UgxiUl3e0…`: "What made me tune out of the AI talk was the hyper focus on the weird idea of qu…"
- `ytc_UgziG2rfW…`: "Just imagine when they get a job and they're like yeah I need to go on a break a…"
- `ytc_UgzBsghbD…`: "50:34 i'm a welder. Hopefully ill make some money teaching all the people who mo…"
- `ytr_Ugzfsbixu…`: "We appreciate your feedback! The interaction between the presenter and the AI ro…"
- `ytr_Ugzc6ZODG…`: "Im afraid more about ai tech getting into hands of bad actors than the ai being …"
Comment

> So, what Elon is actually saying is, the super immoral human manipulators, that like to control the human populations of the world, to benefit and enrich only themselves, don't want to relinquish control to a smarter AI, they don't control? Oh, my heart bleeds for them so much, because they will lose their monopoly. Yes let's have more laws from the immoral human lawmakers to level the playing field.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-04-18T22:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzQHLafqjPfJ579Qt94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwx_zlVhyIzxaYToW14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwMFzVrZyEUZLYFd454AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzoWI73RsbZLzkCIVB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw84K6HbnjS6tv0ubp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxjEWY9s5gkM0am9nx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPX4d2gqnUW1e4vDR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyTMbkPPfAbImjMlUV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy97OAilOmcrutiapR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwkMzppZuxlcjfRtKF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
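A raw batch response like the one above can be parsed and sanity-checked before the codings are stored and indexed by comment ID. Below is a minimal sketch in Python; the allowed category values are an assumption inferred only from the samples shown here (the actual codebook may include values not observed in this batch):

```python
import json

# Category vocabularies inferred from the sample batch above.
# This is an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"government", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval", "resignation", "unclear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # skip malformed records with no comment ID
        bad = [dim for dim, vals in ALLOWED.items() if rec.get(dim) not in vals]
        if bad:
            raise ValueError(f"{cid}: unexpected value for {bad}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage: validate one record from the batch above.
raw = (
    '[{"id":"ytc_UgzoWI73RsbZLzkCIVB4AaABAg","responsibility":"government",'
    '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
)
result = validate_batch(raw)
```

Failing loudly on an out-of-vocabulary value is a deliberate choice here: a silently stored typo from the model would skew downstream counts, whereas a raised error flags the record for re-coding.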