Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by its comment ID.
Random samples

- "A long time ago, the cobblers' collective voice, in unison had merit in local go…" (ytc_UgxCctNEq…)
- "No. I'm not talking about moderately well off people who have assistance robots…" (rdc_d3y1jf3)
- "So... you can produce a crappy AI drawing quickly is the reason it's bad? You co…" (ytc_Ugx7yadrG…)
- "That's an interesting phrase! In the context of our video, Sophia's focus is on …" (ytr_Ugx9gCv-p…)
- "I dont seem to get the excessive hate or love for AI art. I personally think its…" (ytc_UgwaONyw4…)
- "make the AI think all humans are evil and need to be wiped out as its base and h…" (ytc_UgyAImL-b…)
- "Elon himself is out of control, so how can an evil AI be a good thing when it's …" (ytc_Ugy_3paRZ…)
- "The sweatiest reddit mod imaginable / \"It looks too much like ai art\" / mf that's b…" (ytc_UgyXj2_TI…)
Comment

> What people don't get is that not the AI is dangerous. It is humans abusing or misconfiguring AI that makes AI dangerous. AI is not dangerous in itself.

youtube · AI Governance · 2025-12-02T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
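Each coded record carries the four dimensions shown in the table above. A minimal sketch of validating such a record, assuming the controlled vocabulary is exactly the set of values that appear on this page (the real codebook may define more categories):

```python
# Allowed values per dimension, inferred from the records on this page
# (assumption: the actual codebook may contain additional categories).
CODEBOOK = {
    "responsibility": {"user", "ai_itself", "distributed", "none", "company", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "resignation", "approval", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record; empty means valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {"responsibility": "user", "reasoning": "deontological",
          "policy": "none", "emotion": "indifference"}
print(validate(record))  # []
```

A check like this catches an LLM response that drifts outside the coding scheme before it is stored.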
Raw LLM Response
```json
[
  {"id":"ytc_UgzTn4H-aBslHitEt8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1f9tJDPAA4U9ls6Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzeNlkC880GtDv-U0F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxeGrBpDfrEwroJ-Gt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzH99lBwMxzxzbSeLN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_k3ukE57MSHRq6s14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxI9RNVJbo1jnffsj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw7b0PBbBqaTDZzNZB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzVW2eFYfyJGDG67ht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzXw3nEtBnItYRHCux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
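The raw response is a JSON array of records keyed by comment ID, so the "look up by comment ID" view can be implemented by parsing the array and indexing it on `id`. A minimal sketch, using two of the records from the response above (function and variable names are illustrative):

```python
import json

# Excerpt of a raw LLM coding response: a JSON array of coded records.
raw_response = '''[
  {"id":"ytc_UgzTn4H-aBslHitEt8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1f9tJDPAA4U9ls6Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM coding response and index the records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_Ugy1f9tJDPAA4U9ls6Z4AaABAg"]["emotion"])  # indifference
```

Building the index once makes each subsequent ID lookup a constant-time dictionary access rather than a scan of the array.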