Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "So according to you having a kill-switch in a robot is also unethical if its sel…" (ytr_UgwO4HrA4…)
- "AI "artists" defend it so much because its how they make their gooner material. …" (ytc_Ugx7UuGS0…)
- "Seems super AI might want to stop humans from doing a nuclear armageddon, which …" (ytc_Ugxp927l8…)
- "I wonder if this is why we don't see any other intelligence in the galaxy? The A…" (ytc_UgxV0TDLd…)
- "The section on "Exploitative Practices In Global South" was just insane! Cannot …" (ytc_UgxBB0QYT…)
- "I’m almost scared to finish this because I’ve always said AI is dangerous. I’ve …" (ytc_Ugxy179an…)
- "The trump administration is planning to phase out human intelligence in favor of…" (ytc_UgzYB_7Ba…)
- "The holy crusade against AI art is absurdly laughable to me, but I respect your …" (ytc_Ugw6t1ZTW…)
Comment
I am a psychologist and was disturbed by how the interviewee talked about human consciousness and AI consciousness being the same, except the fact that we are made from nature and AI is not. He also appears extremely callous towards the human experience as a whole. The fact that he thinks empathy is purely cognitive and not physical or visceral is absolutely incorrect. AI has a conceptualization of emotions, not actual ones. His analysis shows a lack of emotional intelligence and depth, which explains his continued work on developing AI despite severe threats to humanity. It wouldn’t surprise me if he believes AI should take over humans because they are superior, as it sounds like he idolizes AI.
youtube · AI Governance · 2025-07-06T22:3… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzl0VQi07zron_fVAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzk4bTLFB-EP0Sx2bF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyHUltXtOnJdFEISdF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy4QC1hSUWFiCMj8ZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqU5tge6MQ13z-xpR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyeGEBI0_rccIkCiz94AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxWVh2zuSkcXqfUlb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVfcJmpyVh9L2DCcZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxabkccazQ5SvK-lgZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwsqaqCeXB8InfE1U14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```