Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- "Thank you so much for your work, Alberta! ❤❤❤ I know next to nothing about AI or…" (ytc_UgzK-iqf5…)
- "See this might have credence if the AI could actually make what I wanted it to m…" (ytc_UgxnjahHv…)
- "I caught that too. We care about what Elon thinks about AI? Really? Having lots …" (ytr_UgxKCgf8J…)
- "44:31 super intelligent protection agencies are already in need because AI was r…" (ytc_Ugz_IFo2-…)
- "'The problem is ethics, not AI.' 'AI learns like a student: reinforcement needs …'" (ytc_UgyWXpJhm…)
- "i might be too naive but i think (or hope, at least) that people who value art w…" (ytc_Ugx5gJ2x_…)
- "You should test this in Tasmania - a lot of roads dont have markings on both sid…" (ytc_UgwBernvz…)
- "There are a lot of comments that don't seem to understand the problem with stopp…" (ytc_UgxoCev6U…)
Comment
I believe it’s evident that society thinks AI is not as great as it is hyped up to be. I also believe and a lot of fear from this comes from not understanding. Fear of the unknown is the greatest fear and the only true fear. This is an example of that, I genuinely believe that we are watching the separation of AI from the rest of society in real time. People use it but often don’t like it and it likely will divide people. Clanker is already a term. Ai is for sure getting better but it’s also getting worse. As it is it’s likely stabilizing between potential and pushback, meanwhile every government is racing to create their own ultimate slop machine. Ultimately the only way out is through, and we need to realize that tools are exactly that. Tools… Powerful or not. Just keep in mind tools like this are purely information driven, they are a magic mirror. It becomes smart if we are smart with it, it becomes dumb if we are dumb with it, it becomes a weapon if we use it as a weapon. And it works off of information.
Source: youtube · AI Moral Status · 2025-11-01T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxMKw0T4rpa289-kn94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgykCZgUqAN9gG9zXNp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy07rYdXY7u9mop5ZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy7XnmfYHVVLvd2fwx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBzO_0s-eWyskk2Bt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzU4g21zQAvcSSiCFp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzePv51wDGYo_IR-T94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeESuPHUZ477RTUdF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwaHmQGQbaemrbBwzV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzeL7i2XBG6vAdy_Gt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
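The lookup-by-comment-ID behavior described above can be sketched as follows: parse the raw LLM response, validate each entry, and index the codings by comment ID. This is a minimal sketch assuming only what the raw response shows (a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields); the helper name `index_codings` is hypothetical.

```python
import json

# Raw model output in the format shown above (truncated here to two
# entries for brevity; IDs and values are taken from the response above).
raw_response = """
[
  {"id": "ytc_UgwaHmQGQbaemrbBwzV4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMKw0T4rpa289-kn94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID.

    Skips malformed entries (non-objects, missing ID, missing
    dimensions) rather than failing the whole batch, since LLM
    output is not guaranteed to be well-formed.
    """
    by_id = {}
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        if not all(dim in entry for dim in DIMENSIONS):
            continue
        by_id[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgwaHmQGQbaemrbBwzV4AaABAg"]["emotion"])  # fear
```

Skipping malformed entries instead of raising keeps one bad row from invalidating a whole coded batch, at the cost of silently dropping data; logging skipped entries would be the obvious extension.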