Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I honestly don't get how my classmate use those ai LLMs because when i use it fo…" (ytc_UgzXN7r5F…)
- "Humans are stoopid. Ai cannot n can never be sentient. It knows how to trick peo…" (ytc_UgzlVzgS_…)
- "Too many people today are plain lazy, wanting machines to do the work for them. …" (ytc_UgwLvhYll…)
- "God is both inside and outside time, and has control over every subatomic partic…" (ytc_UgxURAk59…)
- "Have to be careful about it. Unfettered development, especially with something …" (ytc_UgxSXPkbf…)
- "I got the soldier one right because the word on his bag is readable😂 ai in the o…" (ytc_UgymwUtAz…)
- "The argument of free time is silly. In my opinion we can become human again, bec…" (ytc_UgwRFQ_X-…)
- "Practicing radiologist. At this point, it's not so much AI itself as it is the …" (ytc_UgwhkG5gm…)
Comment
14:08 "It's hard to make an AI that's smart that doesn't realize true things" I think that's something that most people will struggle to understand about ASI. If it's smarter than us it necessary means that it is at least as smart as us and we built a thing that is smarter so ASI can figure out a way to build an even smarter ASI and so on. But also people are smart enough to lie, smart enough to manipulate, smart enough to conceal their true intentions from other humans so it stands to reason that ASI will be smart enough to do those things as well and it's very easy for humans to do those things to those who are not as smart as us therefor it will be very easy for an ASI to do that to even the smartest humans.
All that to say that there must be a great deal of caution to what the initial central ("terminal") goal is when we build it. Because once it's built we will not be able to change it no matter what we do.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-05T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
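The coding result above can be sanity-checked against the label vocabulary. A minimal validation sketch — note that the label sets below contain only the values visible on this page, not the tool's full codebook, so they are an assumption:

```python
# NOTE: these sets hold only the label values visible on this page;
# the actual codebook may define more categories.
OBSERVED_LABELS = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value is not in the observed label set."""
    return [dim for dim, allowed in OBSERVED_LABELS.items()
            if record.get(dim) not in allowed]

# The coded result from the table above.
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "none", "emotion": "fear"}
print(validate(coded))  # an empty list means every dimension carries a known label
```

A check like this catches malformed model output (missing keys, hallucinated labels) before it reaches the results table.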
Raw LLM Response
```json
[
{"id":"ytc_UgyNQWlffPiwXII38Ut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwIWGMqA46eD0_khKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwbEDqgUurgYiRH-xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhHmXr4G28Xx7zA0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5XBIuUdSqwlGaa-14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-W_mGG5862d82-OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzMNuramyz21pKhxAJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhVfwEzTPiw9VXD1B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVxezbFIcXOeMvwBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
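The raw response is a JSON array of coded records keyed by comment ID, so the "look up by comment ID" view can be served from a simple dictionary. A minimal sketch, using two records from the batch above (the variable names are illustrative, not the tool's actual code):

```python
import json

# Two coded records copied from the raw LLM response shown above.
raw = '''
[
 {"id":"ytc_UgyVxezbFIcXOeMvwBl4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyhVfwEzTPiw9VXD1B4AaABAg","responsibility":"distributed",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

records = json.loads(raw)

# Index by comment ID so any coded comment can be retrieved directly.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgyVxezbFIcXOeMvwBl4AaABAg"]
print(rec["emotion"])  # -> fear
```

Because the model returns one record per input comment, indexing by `id` also makes it easy to detect dropped or duplicated comments in a batch: compare the set of returned IDs against the set of IDs that were sent.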