Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> No reason to be scared. Having more information or more easily accessing information is just that, information. If it was certain A.I. could become conscious or, in a particular situation, act according to intuition rather than on programed information alone that would be a different matter.

Source: youtube | Topic: AI Governance | Posted: 2024-02-07T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwBg4veoiSj7S6ie6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGLK2fzlp30io40UR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwg3Z5EDQ5bM2kmqop4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy2Wa4_fI4shpzNBdl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwNUhVEJU6NcTw0kit4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwqeS3PyommUmPXRIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwqLZW4Jm_yUIhFDnJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwywdPFXwYeHSidxud4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynG7w8jzBitQDl8ZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzAELx8gbGZjhy_m5J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
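A batch response in this shape can be parsed and schema-checked before the codes are stored. The sketch below is a minimal example, not the project's actual pipeline: the dimension vocabularies are inferred from the sample output above and are assumptions, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# These vocabularies are assumptions, not a documented schema.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "user",
                       "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "resignation", "fear",
                "mixed", "approval", "unclear"},
}

def parse_coding_response(raw):
    """Parse a raw batch response and keep only schema-valid records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must carry a string comment ID plus one
        # allowed value for every coding dimension.
        if not isinstance(rec.get("id"), str):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwBg4veoiSj7S6ie6l4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
       '{"id":"ytc_bad","responsibility":"aliens","reasoning":"mixed",'
       '"policy":"none","emotion":"mixed"}]')
records = parse_coding_response(raw)
print(len(records))                 # the record with an out-of-vocabulary value is dropped
print(records[0]["emotion"])
```

Validating against a fixed vocabulary at parse time catches the most common LLM coding failure, an out-of-vocabulary label, before it contaminates downstream tallies.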