Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
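Under the hood the lookup is a plain ID match against the stored coding output. A minimal sketch follows, assuming the coded comments are exported to a JSON Lines file with one record per comment; the file name and the `lookup_comment` helper are illustrative placeholders, not the app's actual storage layout.

```python
# Sketch only: look up a coded comment by its ID in a JSON Lines export.
# "coded_comments.jsonl" and the record layout are assumptions, not the real schema.
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the stored record whose "id" matches comment_id, or None."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the record behind one of the IDs shown on this page.
print(lookup_comment("ytc_UgyDTz1zemEdx5-PBlx4AaABAg"))
```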
Random samples — click to inspect
- “I think he doesnt understand how hard it is to train and use an ai (not those we…” (ytc_UgzY-Cusj…)
- “Google gave up on search in the sense that search is *find me a website with inf…” (rdc_n3x9zle)
- “We can make safe and sustainable AI supported society, but we have to pivot soci…” (ytc_UgwQXo713…)
- “Y’all are like monkeys looking in a mirror, thinking your reflection is alive be…” (ytc_Ugy5C1W4t…)
- “Sounds like the best way to alleviate the lack of jobs problem is to burn down t…” (ytc_UgxG5ieuZ…)
- “Personhood belongs to humans and humans alone. AI is not a person. Suggesting …” (ytc_UgwQhI-5J…)
- “My new SEXBOT has a WIFI connection with AI and can answer every question , I …” (ytc_UgwAOUEU1…)
- “There is a well known story that an AI computer started to cheat against a "Deep…” (ytc_UgwCWaq_o…)
Comment
You actually disproved your own arguement at 52:56. "If someone told me there was a 1% chance Id die if I got in a car, I wouldnt get in that car". Theres a chance every single time you do get into a car. Sometimes its more than 1%, sometimes its less but every single time, even if its not moving, theres a chance youll pay the ultimate price and youll still do it. Dame with the drink scenario. Its less than 1% but there is a chance every time you consume something theres an issue with it thay will lead to our deaths. But we still impibe them. There is never a zero percent chance what we do is 100% safe.
However with something like A.I. theres more likely scenarios that something catastrophic will happen than not.
youtube · AI Governance · 2025-09-08T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyPI0cwjXNF6YmWZRx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugw6u1PL8HM8JjiPJPV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwi-LcJtKN3drWMM_B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwYT2V_V617fmju8154AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyDTz1zemEdx5-PBlx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxMMRqJ3jWGbX6NsH14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlBG_YTyzju-MYFbV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxI9jZfiwf3BM4rYa14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxmMmGP0qbqMGKDYcV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy6oyThkUxQfOAGkRV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
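The Coding Result table above is simply one element of this batched array: the object whose `id` matches the inspected comment, with its four coded dimensions (the `Coded at` timestamp is added when the response is stored). Below is a small sketch of that mapping; the `coding_result` helper is illustrative, and the response is pasted in as a string truncated to the single relevant entry.

```python
# Sketch: recover the per-comment "Coding Result" row from a batched raw
# LLM response like the array above. Keys mirror the JSON; the truncated
# raw_response string here holds just one entry from that batch.
import json

raw_response = """[
  {"id": "ytc_UgyDTz1zemEdx5-PBlx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]"""

def coding_result(raw: str, comment_id: str) -> dict | None:
    """Return the coded dimensions for comment_id, or None if it is absent."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return {
                "Responsibility": entry["responsibility"],
                "Reasoning": entry["reasoning"],
                "Policy": entry["policy"],
                "Emotion": entry["emotion"],
            }
    return None

print(coding_result(raw_response, "ytc_UgyDTz1zemEdx5-PBlx4AaABAg"))
# {'Responsibility': 'none', 'Reasoning': 'consequentialist', 'Policy': 'none', 'Emotion': 'mixed'}
```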