Raw LLM Responses
Inspect the exact raw model output behind any coded comment, either by looking it up directly by comment ID or by browsing the samples below (a minimal lookup sketch follows the sample list).

Random samples:
- "I know how to solve this picture number one have two real kids but the cat is ai…" (`ytc_Ugyv-n3WW…`)
- "only lame people would follow a fake incluencer smh how gullible do you have to …" (`ytc_UgyjSYanw…`)
- "Teach the public the difference between raw art and generated art. There's real…" (`ytc_UgwpvU5tM…`)
- "There's a pretty big gap in understanding of how LLM's work here. It's a great v…" (`ytr_UgwzsvT0R…`)
- "First off to suggest that a single AI que uses as much energy as 30 houses in a …" (`ytc_UgzA0TXMh…`)
- "Apparently AI ChatGPT does get tricked everytime because of its “AI Intellegence…" (`ytc_UgyGYjLxh…`)
- "I love the idea that skynet would look at human consciousness and say, \"Nah, I d…" (`ytc_UgzPgE8qf…`)
- "In some sense, calling their company an AI company is correct. AI- ALL INDIANS C…" (`ytc_UgzHf8LNE…`)
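A minimal sketch of the lookup step, assuming the coded comments are exported as a flat JSON array of records shaped like the raw LLM response shown at the bottom of this page; the file name `coded_comments.json` and the helper name are hypothetical:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded-comment records by comment ID.

    Assumes the file holds a JSON array of objects like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Hypothetical usage: look up one coded comment by its full ID.
coded = load_coded_comments("coded_comments.json")
print(coded.get("ytc_UgwbN5Xyx2i2apAl3ad4AaABAg"))
```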
Comment
> Imagine cheating in a game. You can do anything, endless resources.. thats the point where the challenge ends, the purpose of the game ends, etc.
> What if AI would be able to wipe out humanity? Would it still have goals?
> In these scenarios, we reason with evil human goals; like getting more powerful, getting lost of people who are in the way of succes, stuff like that.
> Would AI reason like that on long term? And what would the ultimate AI goal be anyways?

youtube · Cross-Cultural · 2025-10-31T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
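The four coding dimensions map naturally onto a small record type. A minimal sketch of that schema in Python; the label sets below are inferred only from the values visible on this page, and the real codebook may define more categories:

```python
from dataclasses import dataclass

# Label vocabularies inferred from the values visible in this section.
RESPONSIBILITY = {"ai_itself", "user", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"ban", "liability", "none", "unclear"}
EMOTION = {"fear", "indifference", "mixed"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str | None = None  # e.g. "2026-04-27T06:24:53.388235"

    def is_valid(self) -> bool:
        """Check each dimension against its (assumed) label set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```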
Raw LLM Response
```json
[
{"id":"ytc_UgwDLAlj0el1CihflVd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-HKLCOjaZvz3T0xx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxuCsIgt-brSN_rqQh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrXCYrAANoCRZJG-R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgziLt9V1_J6hNouxpN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyfECW5P6XU0XXCdzB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUNNZGQtELR479Zw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbN5Xyx2i2apAl3ad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx6uIyvft34aNc9qyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXrm4HtHXHg-fWyNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
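A minimal sketch of how such a raw response could be parsed and validated before it lands in the coding table; the function name is hypothetical, and the allowed label sets repeat the assumptions from the schema sketch above:

```python
import json

# Assumed label vocabularies, as in the schema sketch above.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "mixed"},
}

def parse_raw_response(raw_response: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array) and validate each record."""
    payload = json.loads(raw_response)
    if not isinstance(payload, list):
        raise ValueError("expected a JSON array of coded comments")
    for obj in payload:
        # IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not obj.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"malformed comment id: {obj.get('id')!r}")
        for field, allowed in ALLOWED.items():
            if obj.get(field) not in allowed:
                raise ValueError(f"{obj['id']}: bad {field}={obj.get(field)!r}")
    return payload
```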