Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "What I’m curious about is how AI would actually go about ending humanity it that…" (ytc_UgyOE-Kg_…)
- "And these companies reducing their bottom line by replacing employee wages with …" (ytc_Ugy7q7tr9…)
- "Just gotta pay 50,000 a year😂 "let's help our youth be better educated" you mean…" (ytc_Ugx00PVQc…)
- "AI knows you from your questions previously to it. It will still mold it's respo…" (ytc_Ugy3rHwVb…)
- "The thing is if AI becomes self-aware and begins designing better versions of th…" (ytc_UgzC_pylm…)
- "36:20 This is actually where the original plot of the matrix actually becomes su…" (ytc_UgwpxWBRw…)
- "this is where AI revolution will come from, those fcks including engineers who d…" (ytc_Ugz6vVaKC…)
- "As someone who isn’t exactly anti-ai unlike most people I AGREE WITH YOU! Ai is …" (ytc_Ugxr4buzR…)
Comment

> Much of the beginning, around 9 minutes in is naive. Ok, so some people build AI that "can't" harm us... but there are evil people. One of those people forks the open source project or hacks the non-open one, modifies it to override the safeguards, and then the safety argument is null and void. There are MANY people out there who want destruction, so it will take little effort to make it happen, if it's possible to begin with, minus safeguards.

youtube · AI Governance · 2025-07-07T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyM9-GV9ylQkvnoe5V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy3JjbcO9WSiZAyd094AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjoXI3vrWhdcxLmKx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzhoasHfaoJk4JL7RV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynsclRVW5hzrQx7Wt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw4P-EQI6itlPf61rd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzc27kJRGbkMdNWwJx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwBnnotCe9soiC20sJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwuAjmm9gSJOVIrdIN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw754Q5AI5T09ZB1dV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
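The per-comment dimension table is extracted from a batched JSON response like the one above. A minimal sketch of how such a response could be parsed and validated, assuming the allowed category values are exactly those seen in this sample (the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# this is an assumption, not the project's actual codebook.
SCHEMA = {
    "responsibility": {"user", "developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a batched coding response into a dict keyed by comment ID.

    Raises ValueError on a missing ID, a missing dimension, or a value
    outside the assumed codebook.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without an id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        # Keep only the coded dimensions, dropping any extra keys.
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

With the response above, `parse_coding_response(raw)["ytc_UgwjoXI3vrWhdcxLmKx4AaABAg"]` yields the user / consequentialist / regulate / fear record shown in the Coding Result table. Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label.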