Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "as a derivative of human beings' work, AI outcome has to feed and develop the hu…" (ytc_UgwjIOro2…)
- "I think I personally agree more with points in the first video (AI art is a new …" (ytc_UgxDc39TO…)
- "OMFG. Dude, you had me until you went, “it’s ok when video editors or photo edit…" (ytc_Ugxmykftn…)
- "Things plugged in to a computer with a human controlling it. Everything it says…" (ytc_Ugw4yRflB…)
- "Humans have intuition and empathy. This is very important in healthcare. I Don't…" (ytc_Ugycd6-10…)
- "At one point, I believe it was Mitchell but might have been LeCun, said that cor…" (ytr_UgzA8QT36…)
- "I'm not an expert, but something about this feels worth pausing on. The tension …" (rdc_ohx6xm8)
- "Obvious dilemma about controlling AI and thinking outside the human mental box, …" (ytc_UgzhfMP98…)
Comment
But like for AI to want to get rid of us one day, doesn't it need to have human ambitions? Like this is the main difference between us and AI, we are flawed and AI isn't. We are the only creatures in the world who think it's intelligent to start world wars just cos of big egos and inability to communicate and have humility.
AI never was a threat, or ever would be - Humans on the other side are the real threat imho
youtube · AI Governance · 2025-06-16T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
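A coded record like the one above can be sanity-checked against the label sets before it is stored. This is a minimal sketch; the enumerations below are inferred from the values visible in this dump, not from a documented codebook, so the real schema may differ:

```python
# Label sets as observed in this page's output; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"user", "developer", "company", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "outrage", "indifference", "fear",
                "mixed", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record coded above passes:
print(validate({"responsibility": "user", "reasoning": "virtue",
                "policy": "none", "emotion": "approval"}))  # → []
```

Rejecting out-of-schema values early keeps a single malformed model response from contaminating the downstream counts.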
Raw LLM Response
[
{"id":"ytc_Ugy4OJP2NOm3_sarDfh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWd0rg3-zoWJZvr7Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyBhKrWFamevo5NCeR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9VSD53QvbafPpjyV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyL7p_iREOlL9msr-94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzYD_YFV2sPS3kzh-F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5prgKMY7GqOPk-Ox4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzM7xhrfDcAFz1Yimt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxyCB8ZnDkvDOE36ml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzaQmSeZuXsHcJ46RR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
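The "Look up by comment ID" feature can be reproduced directly from a batch response like the one above. A minimal sketch, assuming the model returns a JSON array with one record per comment (the two records here are copied from the array above; function names are illustrative):

```python
import json

# Two records from the raw LLM response shown above.
raw_response = '''
[
  {"id": "ytc_Ugy4OJP2NOm3_sarDfh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzaQmSeZuXsHcJ46RR4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgzaQmSeZuXsHcJ46RR4AaABAg"]["reasoning"])  # → virtue
```

Indexing by ID is what lets the page join each coded record back to the original comment text regardless of the order the model emitted them in.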