Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I have had this exact same conversation, but I cannot get any AI model that I wo…" (ytc_Ugzec5b4K…)
- "AI was released beacuse the elites believe that AI can detect the anticrist, but…" (ytc_UgxfUxFLw…)
- "So? This is life, some professions just disappear over time, because technology …" (ytc_UgyB_gAen…)
- "Only a matter of time before AI identifies, sanctions you, fines you etc and lea…" (ytc_UgxeqbG6h…)
- "Ok no shut up. AI Drivers wont help climate change. The vehicle will still produ…" (ytc_UgzAIAGJ-…)
- "I study traditional Jewish texts and have been dreaming of having an AI Rabbi th…" (ytc_UgwujhMEq…)
- "This video aged well. AI users (they claimed they are artists, no they're not) a…" (ytr_Ugzt2J_lc…)
- "Parents need to be more in the know with there kids This could of been avoided…" (ytc_Ugx3CGAIA…)
Comment
People are putting way too much faith in LLMs. Anyone who’s spent any time with ChatGPT knows how wildly it hallucinates, and how often it backtracks on its own responses.
It’s only ever pulling information from what already exists… which also includes a LOT of questionable 4Chan and Reddit threads.
Not discrediting the entire thing necessarily. But this isn’t comparable to having a conversation with a whistle blower. ChatGPT doesn’t inherently know what “the truth” is, or how to distinguish between reality and fiction.
youtube
AI Moral Status
2025-11-25T07:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugx0Llzct2CbSnQAq_54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzt1MWutTPifEpzBwd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRSJHIePhs2zKe2zF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz9fUx9vONb92CU56d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx8sR5xOuJzEk8nDlt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw2e4Pu5PtPbj_xb2F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHfSYQV3oKRK72XCJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyMIiqzjGdpyQXm4MZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyxXRigkxbHuXdkR454AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx-sd-8_shBEajPzDF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}]
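A raw response like the one above can be parsed and indexed by comment ID to support the lookup described at the top of this page. The sketch below is a minimal illustration, not the tool's actual implementation; the two records are taken from the response above, and the `index_codings` helper name is made up for this example.

```python
import json

# Two records copied from the raw LLM response shown above (abridged).
raw_response = '''[
  {"id": "ytc_Ugx0Llzct2CbSnQAq_54AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9fUx9vONb92CU56d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

def index_codings(raw):
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises json.JSONDecodeError if the model emitted malformed JSON,
    which is worth catching when batch-processing raw responses.
    """
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_Ugz9fUx9vONb92CU56d4AaABAg"]["emotion"])  # prints "fear"
```

Keying the lookup on the `id` field mirrors how the "Look up by comment ID" search above resolves a coded comment to its dimension values.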