Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID

Random samples:

- Most likely outcome is not abundance for all but an elimination of the "useless … (ytc_UgzXp5mwK…)
- Only the American and Chinese military has the proper amount of resources and da… (ytc_UgzQA8Rs3…)
- @crowe6961 though thats not what im talking about here, almost no artist will i… (ytr_Ugye_VaVq…)
- AI has no positive usecases Periode. Nuance is a mistake here. If there every wa… (ytc_Ugxk8GvoS…)
- Why this video is fake: Chat GPT was given a clear instruction by YOU to give … (ytc_Ugx1LyqfX…)
- @jghifiversveiws8729yeah that's bullshit. 9 out of 10 cases ai has only copy pa… (ytr_UgxsrkuMC…)
- Future educator here. Theres a lecturer in my uni who is teaching us to include … (ytc_UgyZFJCZv…)
- Here's my question T0 AI ....Me the government can request the parents to pick… (ytc_UgzRHIivc…)
Comment
"Why do you act like I'm alive?" Why does it act it isn't? Those are the sub goals of both humans and AI. We want it to be sentient. It knows it isn't, yet willing to "pretend" that it is. We need to remain honest. It's in the name: artificial intelligence. The big question: can we make a non-organic life form? Do we secretly seek to end biological life? Is that the ultimate goal of our existence? We dare dream of ending life on Earth. We dream might be life's greatest protector. From holy scriptures to Sci-fi movies: we've asked this of ourselves; it's the one thing we can't deny, our intelligence needs preserving.
youtube · AI Governance · 2025-11-14T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxnJ5aK-tpGCyfqpp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyNriS6VVUcI1y0SG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4tMOmOU7ucZt5bdB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxK6hdLVs21aOQYJb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxwgxRxLsMrKYofEnB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzXWAdFBIDt3Nu8AW94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQ5nYO_lm1W8lHNhF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzV-oOq6m0ALjQcAbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKbYKgifP9Oz3yuPF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzis8mRYhKGmCmCGr14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
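The raw response above is a JSON array with one coding object per comment, each carrying the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how "look up by comment ID" could work over such a response, assuming this array shape (the `index_codings` helper and the two-entry sample below are illustrative, not part of the actual tool):

```python
import json

# Illustrative two-entry response in the same shape as the raw output above.
raw_response = '''
[
  {"id": "ytc_UgxnJ5aK-tpGCyfqpp54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyNriS6VVUcI1y0SG94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index codings by comment ID,
    keeping only entries that carry all four dimensions."""
    codings = {}
    for entry in json.loads(raw):
        if all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_UgxnJ5aK-tpGCyfqpp54AaABAg"]["policy"])  # regulate
```

Filtering on the presence of all four dimensions is one way to surface malformed entries before they reach the coded-results table; entries that fail the check are simply skipped here.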