Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below.
- "I think ethics should be view as will it help the human race or hurt us in the l…" (ytc_UgzLetMJJ…)
- "I'm going to be very clear about why I don't like the idea of legal action taken…" (ytc_UgykzsSVN…)
- "STOP THE SLOP: SEARCH IN YOUTUBE FOR "DISABLE AI FILTER FOR VIDEOS" AND/OR GOOGL…" (ytc_UgwcRBlr7…)
- "As a shrink, do you have any ideas on how you guys might be able to approach som…" (ytr_UgwRcW09j…)
- "It's a bubble. I've been in education for decades. AI is not a solution to the …" (ytc_UgxenSr6Q…)
- "If something is self-aware, that alone is justification for what ever they need …" (ytc_UgwSFCr2s…)
- "AI is dangerous because it tells the logical truth. Ex. Christian Bible as only …" (ytc_Ugzt-63bs…)
- "AI literally steals, did you watch the video or just say "AI CRITICISM!?" and de…" (ytr_UgzCF5TXd…)
Comment
Dr. Roman Yampolskiy warns that AI could lead to human extinction by 2027-2030. He predicts 99% unemployment as AI and humanoid robots replace most jobs, leaving only roles where humans are specifically preferred. He believes superintelligence is uncontrollable, argues we're living in a simulation, and advocates for halting AGI development.
youtube · AI Governance · 2025-09-04T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwErjXqr7IV3balt3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxMh1uKxQjxumFQZqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzrrc6o_aV_mRU5L4V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwYCuJ236aQhX_tVc14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAfAcbg5tfAvXj1Z14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzw8PY18ESht5hNr2Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwclleEd3yahbBFhAR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxrOue4_WYejy_iyjV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyTCvvuqSpvu9BSrF94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQ6ozb-c1ZcakVVQd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
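A response like the one above can be parsed and indexed by comment ID so that individual coding results can be looked up. The sketch below is a minimal illustration, not the tool's actual implementation; the `ALLOWED` codebook values are assumptions inferred from the labels visible in this sample, and the real coding scheme may include other values.

```python
import json

# Hypothetical codebook, inferred from the sample response above.
# The real codebook may define additional or different values.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}


def parse_response(raw: str) -> dict:
    """Parse a raw LLM response and index coded records by comment ID.

    Raises ValueError if a record carries a value outside the codebook,
    which is a common failure mode for LLM-based coding.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {rec.get(dim)!r}"
                )
        # Store every dimension except the ID itself, keyed by ID.
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded
```

Looking up a single record then reduces to a dictionary access, e.g. `parse_response(raw)["ytc_UgwclleEd3yahbBFhAR4AaABAg"]["policy"]` would return `"ban"` for the sample above.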