Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Microsoft CoPilot has lied to me about doing a task for three days straight and …" (`ytc_Ugyg8zIBw…`)
- "9:45 Bruh, all this comments are built like a cliche villain one-liners, this is…" (`ytc_Ugw37d7IR…`)
- "I'm I the only person who's not all that impressed by AI? I use it and actually …" (`ytc_Ugx-sZO9O…`)
- "San Francisco with all the smart devices in everyone's home come on now ai just …" (`ytc_UgxQjBBi1…`)
- "Its already smarter than we are. Open your eyes. Or are you AI yourself disguise…" (`ytr_Ugz53CldS…`)
- ""I, Robot" is a science fiction book by Isaac Asimov that explores themes relate…" (`ytr_UgwB8NKua…`)
- "This is soooo AI generated. Why bother reading ts when noone even bothered writi…" (`ytr_UgzjJJOnl…`)
- "Musk HAS a moral compass - to save humanity by colonization of Mars and reduce f…" (`ytc_UgyUuRsWZ…`)
Comment
I don’t interact or engage with AI, but I don’t fundamentally have a problem with it.
The problem I have is that I don’t trust the incentives, or the people who own the processes of making these systems.
If we had a different societal structure, with more trustworthy people with better ethics in control of these processes and directives, then it wouldn’t be so scary.
Even though a superintelligence would not be like a human, all the foundations for it to rise come from humans, and the people, the society, and the incentives influencing those foundations worry me more than the concept of AI itself.
Take the ethics part of it, for example: look at how people are treated by other people, especially in economic situations in America, where much of this AI is being developed.
The people in charge of these initiatives already don’t treat people like they’re people. Take one step removed: do you think they’re going to treat something that doesn’t have fundamental rights with any respect or decency?
If you did create a real AI, a superintelligence that is a true entity, I don’t trust these people to do the ethical thing, and I don’t trust that relationship to end well. That can have fundamental existential consequences.
To go back to the societal point: we base so much of our self-worth and physical well-being in work and labor, and we have nothing replacing any common good or welfare if you’re going to eliminate so much of that.
It’s sadly ironic that the most destructive thing about making something that can think, or approximate thought, would have such fundamental consequences because we chose to be thoughtless in its pursuit.
Source: youtube · AI Moral Status · 2025-10-31T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzP70ix2PKtiHVcbWN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzGAl1hr4cKdxQ5ez54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugye_52wf7-yvnbmb814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_HCArOhYX7qErAN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeVF3QOmvsKgDvEel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzKrVVcaRxCW5jxgoB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw80i-COGpIL6xpnEd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxMbtsrZZJWmzZn7654AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyf_JcKywvlI9mqp_h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwrnJdWRTx_ANa3BnR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
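Since the raw response is plain JSON, one coded record per comment, it can be checked mechanically before it enters the dataset. Below is a minimal sketch of such a validation step; the `SCHEMA` sets are inferred only from the values visible above (the real codebook may allow more), and `validate_batch` is a hypothetical helper, not part of this tool.

```python
import json

# Allowed values per coding dimension, inferred from the table and the raw
# responses shown above. Assumption: the actual codebook may define more values.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA.

    Raises ValueError on a malformed record, so bad codings fail loudly
    instead of silently entering the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        # IDs in the samples above start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the response above, used as a smoke test.
sample = (
    '[{"id":"ytc_UgzP70ix2PKtiHVcbWN4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
)
print(len(validate_batch(sample)))  # → 1
```

A check like this catches the common LLM failure modes for structured output: truncated JSON, invented category labels, and missing dimensions.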