Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.
Random samples

- "@AsianDadEnergy x10 increase in size of LLM produce 10% better results, classic …" (ytr_Ugyu-0yUA…)
- "Imagine that she had the body of Atlas from Boston Dynamics, now she has the com…" (ytc_Ugx7NkgbI…)
- "No. Voting for a machine would put us at the mercy of machines who are unable to…" (ytc_Ugz18GRid…)
- "I have a hard time accepting some of the things ppl say about Musk. I know some …" (ytc_UgyaXW7-0…)
- "I got google ai to ask me questions it asked me more then one question 😅 4 in a …" (ytc_Ugx5wOSHe…)
- "I think we seriously do not need A.I, we been fine up to this point without them…" (ytc_Ugz84cTy6…)
- "Ban ai film and animation because of threats of national security (fake videos a…" (ytc_UgyrCrKf6…)
- "Wait 2 months, and you will see what IQ=1000 means. Today the hardest CS competi…" (ytr_UgxAwyQxK…)
Comment
> How do you know this world isn't controlled by an A.I.? A binary synaptic rate of diffusion would define a weather pattern as a "Pre Defined" state of existence eliminating extraneous intangibles. "Global Warming" & "Climate Change" are illusory concepts contingent on flawed premises. Is Global Warming an A.I.? Is "Climate Change" an A.I.? The objective is to create a world void of: 1. Poverty 2. Evil (No Crime) 3.Illness 4.Disease 5.Aging. It can ONLY be done once the worlds population is reduced by 7 billion people. This is referred to as the "Great Reset." We are headed for a "Brave New World."
youtube · AI Governance · 2025-12-29T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyrMkvBqhNlKrYJt2p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyiXeVhaXiXhKVEyn54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeM1kxR_m_ePuRgbN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzG_Dmavffk1zgYRJR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx1m_XzS5UW8DcWeb14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgymX1PqG3vFhtl9b3x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzQkrkzoL0zwJIob694AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-dtRB-5Pj-9rj93V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9g5d4KZ1-h0IOFN14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyloTJ_ly2LXBcvLHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
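Downstream code should not trust this raw output blindly: the model can return values outside the codebook or drop a dimension. A minimal validation sketch, assuming the dimensions and value sets inferred from the sample above (the real codebook may allow more values, and the `validate_coding` helper is hypothetical):

```python
import json

# Allowed values per coding dimension, inferred from the sample response;
# this is an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the dump are prefixed ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgyrMkvBqhNlKrYJt2p4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"unclear"}]')
print(len(validate_coding(raw)))  # 1
```

Rejecting a whole batch on one bad record (rather than silently coercing it to "unclear") keeps coding errors visible and the batch re-promptable.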