Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"My initial reaction to this video is that people don't know the difference betwe…" (ytc_UgwqOthTA…)
"For a person of average intelligence, AI junk is easy to spot, the question is, …" (ytc_UgwSpvRw5…)
"I don't trust that AI will be the future. Not that they are bad per se, in some …" (ytc_Ugw-rQ0Kz…)
"There is two ways in my opinion. Teach people how to use ai as a talking diction…" (ytc_UgzHDNFVS…)
"I wish AI was a benefactor to humans... I hope it's not cold pure calculation fo…" (ytc_UgzMkR4xS…)
"I don't think I'd like anyone (AI or not) continually complimenting me and makin…" (ytc_Ugy5RQExd…)
"The problem with AI is that it doesn’t really understand anything. All it does i…" (ytc_UgyF4qM3r…)
"Instead of making robots, feed the poor and with this money help the…" (ytc_UgwIfkXSG…, Italian in the original)
Comment
Digital computers have been faster and smarter than humans in technical disciplines for many decades. We have been grappling with the problems they cause quite successfully. AI will significantly expand those capabilities and problems, but still in the same technology space. The real danger in the human condition, the cause of war and suffering in history, is our emotional intelligence. Digital computers will never have emotional intelligence, AI or not. So digital computers will never be totally smarter than humans, nor will they ever exceed our capacity to do harm.
youtube
AI Responsibility
2025-12-19T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxNqbyIl7M4bQEgaIR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz50j53w51HNfgjGDJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxkAENTzKERlhCB0854AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy_YUljBu2eXWNdgM54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3bcKLUZ_-1hFZ5Yh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyL4o5yAXpCoN0HjMN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzeXwT00RfpAoslDod4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxiSqNjho88B-HHWAp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGY8thiQWtQyo7cGp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyT9P4d5S5HBM85GAx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
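A minimal sketch of how a raw response like the one above could be parsed and indexed to support the comment-ID lookup. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and the sample row come from the dump above; the function and variable names are illustrative, not part of the tool.

```python
import json

# One row copied verbatim from the raw LLM response above; a real
# response is a JSON array with one such object per coded comment.
raw = """[
  {"id": "ytc_UgxiSqNjho88B-HHWAp4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse the raw model output and index each row by its comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codes = index_by_id(raw)
print(codes["ytc_UgxiSqNjho88B-HHWAp4AaABAg"]["emotion"])  # indifference
```

Indexing by ID makes the "look up by comment ID" view a single dictionary access; a production version would also validate that every row carries all four dimensions before accepting the response.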