Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What I'd like to know is why do we assume super-intelligent AI will be malevolent? What would its motivation be? I mean, we generally don't bother monkeys unless we're competing for resources, and the more responsible among us fight to protect them. Why wouldn't super AI do the same? And also, there's so much science doesn't know about the nature of the reality in which we exist. If super-AI cracks those questions, like manipulating space and time for example, I would imagine many of the issues that concern us today would become trivial, no? It could terraform Mars easily, and fix earth, so resources wouldn't be an issue? Does super-AI necessarily have to be Skynet in the Terminator movies, viewing us as a threat and attacking us with killer bots? To me that sounds like us seriously thinking the monkeys are going to do a Planet of the apes, so we take military action against orangutans and stuff.
youtube
AI Governance
2025-09-04T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwRTfigiEvhPnxfrWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwjof6X6rly-jyDu5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxrNtEX7IwvlEi70u94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwo_eFMIEWRWF2ZWHV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsGL64utAjMHOUo5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz0DApajVA2-wCJjeF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx24BavUjutFF3Fsqh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7fn9uMd0j-kDXmNR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlpT1RGYiJxJOpaDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEEwcvyWc4aKqcsMB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
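The raw response is a JSON array in which each element codes one comment along the four dimensions above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed to look up the coding for one comment ID — the `lookup_coding` helper is hypothetical, not part of the tool; the abbreviated two-entry payload is taken from the response shown above:

```python
import json

# Abbreviated raw batch response: a JSON array, one coding object per comment.
raw_response = """
[
{"id":"ytc_Ugwo_eFMIEWRWF2ZWHV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx24BavUjutFF3Fsqh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one comment ID,
    or None if the model did not emit an entry for that ID."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = lookup_coding(raw_response, "ytc_Ugwo_eFMIEWRWF2ZWHV4AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself mixed
```

Matching on the `id` field is what links a row of the coding-result table back to the exact model output, since the model codes comments in batches rather than one at a time.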