Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- ytc_UgxTq3cPM…: "Amazon installed the palm scanners in Whole Foods over a year ago! Bezos Family…"
- ytc_UgyjK0FRV…: "I hate trashy ai art flooding every platform. it does take some talent to genera…"
- ytc_UgzHAcAUs…: "I'm a grade 11 student in Canada, personally I do not use such AI tools, but I d…"
- ytr_UgwTB2QLZ…: "And it's Shrimpy AL .. I never thought people would mistake that for shrimpy ai…"
- ytc_Ugx2UfEF9…: "I don’t have an issue with AI Art since it can be used alongside human-made art.…"
- ytr_UgxW2kn7A…: "Speaking as a real poet, I found the AI written poetry to be rather simplistic. …"
- ytr_Ugh_eqMzo…: "I love Searle's work btw... Lenat's program is actually called Automated Underst…"
- ytc_UgzTlslgo…: "0:18 OK but what point is the guy talking about digital art tools trying to make…"
Comment

> yes, the dangers of ai can be summed up by saying it can generate propaganda and learn over time the most effective way to do that based on the public response, and furthermore, it can do this... forever unlike any human leader who promotes propaganda who can die. --essentially what elon has said other times outside this video

youtube · AI Governance · 2023-04-18T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzcShx882zGZN9X7WN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXQ-aAN_yINWMCwnt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2YYEghygIvxXYYRx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxpCzcwEFEjn26cud94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwy1f9PF37mMYopH3t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxiEQvyoUeVKotCbG94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgweXKTuoDmhoXNLf0p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxuWxoVAOVFsqeL4IF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxMXwC3BoT42juee2x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyq6gNj_0Zl1hidWml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
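The raw response is a JSON array with one record per coded comment, carrying the same dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming only that schema — `parse_coding_response` is an illustrative name, not part of any real pipeline:

```python
import json

# A shortened sample of the raw LLM response shown above; only the first
# two records are reproduced here for brevity.
RAW_RESPONSE = """[
  {"id":"ytc_UgzcShx882zGZN9X7WN4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXQ-aAN_yINWMCwnt4AaABAg","responsibility":"none",
   "reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

def parse_coding_response(raw: str) -> dict[str, dict]:
    """Parse the model output and index records by comment ID
    so any coded comment can be inspected in O(1)."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = parse_coding_response(RAW_RESPONSE)
rec = coded["ytc_UgzcShx882zGZN9X7WN4AaABAg"]
print(rec["policy"], rec["emotion"])  # regulate fear
```

Indexing by ID rather than scanning the list each time matches how the viewer above works: enter a comment ID, get back the exact dimension values the model assigned.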