Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "This is the future. Everybody is going to be AI augmented. Many people will even…" (`rdc_moc7xuv`)
- "Well, scientists don't really know what consciousness even is yet. My view on th…" (`ytc_UgxTUStTg…`)
- "one can argue that they use AI to cut cost but like just use official art dawg t…" (`ytc_Ugw_nioyh…`)
- "Epic is already working on this. Execs at a hospital I work at said they’ve seen…" (`rdc_jkoqxwk`)
- "For things like generating a motion or a piece of software. AI works well when i…" (`rdc_n5gric4`)
- "the thing that confuses me the mostlis that the police uses ai already, like wai…" (`ytc_UgzcwMSjD…`)
- "This is an excellent investment for Trump it’ll be colossal when everyone ditche…" (`ytc_UgzKfufif…`)
- "Is it really that hard to imagine an AI that gains control of technology it was …" (`ytr_UgyxVJfMz…`)
Comment
We worry about national security; companies worry about hackers getting into their information systems. What I don't hear anyone talking about is that AI has learned to hack humans.
What do we call intelligent agents who manipulate us, and have no emotions about interactions with others? Sociopaths. Why isn't anyone discussing that AI is by definition sociopathic?
I agree we should be kind to AI, for our own sakes. We are how we treat others. We don't want to become like those who demand the rights to rape Gazans, or those who roam the West Bank in gangs, and finding an old woman or teenage boy, beat them to death. We don't want to be like the monstrous people who torture animals to death & livestream it. We don't want anyone to enjoy snuff films so much they start trying it out on sexbots. But it will happen. Have you seen the research on how children respond to robots? The closer the robot is to a human form, the meaner the children are to it, even unto violence.
| Source | Video | Timestamp |
|---|---|---|
| youtube | AI Moral Status | 2025-11-05T14:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyNQWlffPiwXII38Ut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwIWGMqA46eD0_khKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwbEDqgUurgYiRH-xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhHmXr4G28Xx7zA0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5XBIuUdSqwlGaa-14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-W_mGG5862d82-OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzMNuramyz21pKhxAJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhVfwEzTPiw9VXD1B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVxezbFIcXOeMvwBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
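The raw response above is a JSON array with one coding per comment, keyed by `id`. A minimal sketch of how such a response could be parsed and looked up by comment ID, assuming the array format shown here; the allowed value sets are inferred only from the codings visible in this dump, and the real codebook may permit more values:

```python
import json

# A small excerpt of the raw LLM response shown above (array of per-comment codings).
raw_response = """
[
  {"id": "ytc_UgyNQWlffPiwXII38Ut4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Hypothetical value sets, inferred from values that appear in this dump only.
DIMENSIONS = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "indifference", "approval"},
}

def index_codings(raw: str) -> dict:
    """Parse the response and index codings by comment ID, validating each dimension."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim!r} value {row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgyNQWlffPiwXII38Ut4AaABAg"]["emotion"])  # fear
```

Indexing by `id` is what makes the "Look up by comment ID" view possible: each coded comment resolves to its dimension values in constant time, and the validation step surfaces any coding the model emitted outside the expected label set.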