Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_d7kvfcm`: "Yeah, there's often a sentiment on reddit that there should be revolutions in co…"
- `ytc_UgxZKS389…`: "AI is not an intelligence. It is mirroring intelligence. It always depents on th…"
- `ytc_Ugyus7_zW…`: "Actually i starts to use ai less. It looks like the answers are getting worse, s…"
- `ytr_UgzTuT9Jl…`: "@piglig8798 If it was an ai, it would have better grammar, and you would be able…"
- `ytc_UgwphSLWU…`: ""Look guys i just made "Ai art (prompted a computer to generate a image). Artist…"
- `ytc_UgwhsXU2g…`: "Seems like someone needs to make the AI think it's human, and to destroy humans …"
- `ytc_UgwKbrRcv…`: "I really find hilarious that you've made a video about AI art, as an artist who …"
- `ytr_UgypBUMX0…`: "Wait they wanted you to be a graphic designer but then would force you to use AI…"
Comment
Hi Steven, I like your shows. I want to point out on one thing, you know how your shows discuss the pros/cons of a situation, like this one with AI being risky.
Then mentioning how and what kind of risk it can be…Don’t you think there are viewers who is are of the mindset to destroy or is getting ideas on a plate.
They might not be able to think about such actions, but if they watch they know what to do— Specially the virus attacks, corrupt elections, etc.
youtube · AI Governance · 2025-06-16T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
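Each coded record carries four dimensions drawn from a small closed label set. A minimal validation sketch in Python; the allowed-value sets below are assumptions inferred only from the labels visible on this page, not the project's actual codebook:

```python
# Allowed labels per coding dimension (assumed from values seen on this page).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "unclear"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} label: {value!r}")
    return problems

# The record shown in the table above:
coded = {"responsibility": "user", "reasoning": "consequentialist",
         "policy": "none", "emotion": "fear"}
print(validate_record(coded))  # prints [] — all four labels are known
```

A check like this is useful because LLM coders occasionally emit labels outside the schema; flagging them before analysis is cheaper than discovering them in a crosstab.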
Raw LLM Response
```json
[
  {"id":"ytc_UgyK9Xk4XQGalR3oKtp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyRP_MU66pZOEBJ7Ex4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxlop4nn5ciwBoZJ154AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwJZ9nsaZKQutkKoIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx8PGX92cPEtdUvJ2t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz7XZgjp48bQ14Ox1x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8T7Pp-1AMfbUcXo94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzLG_pa9YtW_NhaijZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzFXIzKn4G38HNmPml4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxX0MfZnwth3HwCdLZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
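The raw response above is a JSON array of per-comment records, which is what makes the "look up by comment ID" view possible. A minimal parse-and-index sketch, assuming the model returned valid JSON (a real pipeline would also need to handle malformed or truncated output):

```python
import json

# Two records copied from the batch above, abbreviated for the sketch.
raw = '''[
  {"id":"ytc_UgyK9Xk4XQGalR3oKtp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzLG_pa9YtW_NhaijZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Index the batch by comment ID for constant-time lookup.
by_id = {record["id"]: record for record in json.loads(raw)}

record = by_id["ytc_UgzLG_pa9YtW_NhaijZ4AaABAg"]
print(record["policy"])   # prints "liability"
print(record["emotion"])  # prints "outrage"
```

Keying on the `id` field also makes it easy to detect duplicates (compare `len(by_id)` against the array length) before joining the codes back to the source comments.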