Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- My take is that 'autonomy' is an *active* system rather than a passive one. My v… (ytc_Ugwe02thY…)
- Bulshit it can be shut off at any time!hiw would the owners of this development … (ytc_Ugzc_L4KZ…)
- @eugenekrabs141 if ai is replacing creative jobs soon it will replace every job.… (ytr_UgwW77H0E…)
- "getting absolutely destroyed" Proceeds to show a bunch of people replicating hi… (ytc_UgwFR1sR6…)
- just learn some python first. After that.. you are smooth sailing.. Chatgpt and … (ytr_Ugw5NKfjF…)
- Until we have a clear definition of what sentience is we cannot determine if AI … (ytc_UgzjSpZgG…)
- 12:56 So many people don't even want to acknowledge this simple fact. Before y… (ytc_UgxuvyTXj…)
- I wouldn’t wanna be the person who developed the advanced super AI because you w… (ytc_Ugzmy3aLU…)
Comment
Geoffrey Hinton is not a good person to consult with for defining AI policy. He lacks the political and philosophical chops to offer any meaningful insights into regulating AI. Most researchers are similarly myopic, but Hinton especially so, in that he enjoys being a celebrity. He's no Godfather. He's a flaming leftist.
youtube · AI Governance · 2025-12-30T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzJB9BoauRalk_cwzN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwBN2XHD0uczbB2FaR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxj6eMW0vr9zjctzEl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwX24HywgSDKegDaZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx5tK7-g3cEB3OeHbd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgwhtoxTaVtaSWYROnp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4CbWKq2fNbXnJ_-54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy4Ah_RfweWZq2v0_h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2Sa4VIhYiN7hvB3R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzIye1aFs8K67b2jDB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
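The raw response above is a JSON array in which each record carries a comment `id` plus the four coding dimensions from the result table (`responsibility`, `reasoning`, `policy`, `emotion`). As a minimal sketch of how such a response could be parsed into a lookup-by-ID structure, assuming exactly this shape (the helper name and the skip-malformed-records policy are illustrative, not part of the tool):

```python
import json

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into a dict keyed by comment ID. Hypothetical helper."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Skip malformed records rather than failing the whole batch.
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

# Example with a made-up ID, mirroring the format of the response above.
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"}
]'''
print(parse_coding_response(raw))
```

Keying by comment ID makes the "Look up by comment ID" inspection above a single dictionary access per comment.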