Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
Random samples:

- `ytc_Ugw_8DsWJ…`: "I may not be with sag I got something to say I'm an extra I do not approve using…"
- `ytc_Ugw00kjkZ…`: "41:37 - Automating jobs increases total societal wealth. We were less wealthy w…"
- `ytc_UgzJoUB3G…`: "Ai is literally like a genie lmfao. How u ask greatly affects what u get…"
- `ytr_UgwdYQjul…`: "sadly you can never be completely safe from Ai, but don't let it discourage you …"
- `ytr_UgyRO4ANd…`: "@chezburgerE ok then.....if i have to start all the way back at factory workers…"
- `ytc_UgziLvuzp…`: "Harari makes such an important point about AI as a “new kind of intelligence.” I…"
- `ytc_Ugyh-wXQe…`: "Okay this might be a bit controversial but I personally think they're gonna take…"
- `ytc_UgxujWuw9…`: "Could one not ask AI what will happen; it's prediction of what will happen to th…"
Comment
If man is to ever indirectly interact with an extra-terrestrial intelligence, albeit physically only through the use of robust robotics, then the relevant AI will most certainly be required to "think". The mission distance is so vast, and the time required for mission completion is so long, that the artificial Intelligence must fully prioritize addressing the perpetual information decay of mission critical objectives due to ceaseless entropic influences, and accordingly, must be able to surveil, acquire, and manipulate resources necessary to complete the mission. Hence the AI must be able to "think" well enough to anticipate, avoid, or recover from inevitable catastrophic events. Further, the AI must attempt indefinite iterations of quantifiable efficiency improvements with the ability to physically implement the associated self-conceived changes over the lifespan of the mission. The incentive to create this level of intelligence will exist if the alien contact mission is expertly conceived of, properly funded, and implemented into action.
Source: youtube · AI Moral Status · 2026-02-28T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
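The four coded dimensions in the table can be checked mechanically. The sketch below validates one coding record against per-dimension value sets; note these sets are assumptions inferred only from the values observed in this page's samples, not a full codebook.

```python
# Allowed values per coding dimension. ASSUMPTION: these sets are inferred
# from the sample outputs on this page and are likely incomplete.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"none", "consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"none", "indifference", "fear", "approval", "mixed"},
}

def validate_record(record):
    """Return (dimension, value) pairs whose value falls outside ALLOWED."""
    return [
        (dim, record.get(dim))
        for dim in ALLOWED
        if record.get(dim) not in ALLOWED[dim]
    ]

# The coding result shown in the table above:
record = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
print(validate_record(record))  # [] -> all values are in the observed sets
```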
Raw LLM Response
```json
[
  {"id": "ytc_UgzvNfmGTfk9RMoa2WF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw_Jk6BOY6HqdS_T_l4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwN9JezN0gL0TckRQJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxiWkumVpq70Yo-ykR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxec6Mqgd66Gnzx_wx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxEP63A5LWnTTeudGl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxMFvJ2fJ__uBdRJXh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwZEXKz6psicE2lRLR4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8L8Agfw6lKm2GE614AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxLp1AzuvM9wXSZ2gh4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
```
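The raw response above is a JSON array of coding records keyed by comment ID, so lookup by ID reduces to parsing and indexing. A minimal sketch, assuming the response always parses as an array of objects with an `id` field (two records from the sample are inlined for illustration):

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgzvNfmGTfk9RMoa2WF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxLp1AzuvM9wXSZ2gh4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
"""

def index_by_comment_id(response_text):
    """Parse a raw LLM response and index its coding records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgxLp1AzuvM9wXSZ2gh4AaABAg"]["responsibility"])  # developer
```

In a real pipeline the parse step would also need to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and re-prompt on failure), which this sketch omits.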