Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
Random samples (click to inspect):

- "If AI would be used to make everyone rich and healthy it would be good. But it …" — ytc_UgwDfnHtS…
- "@TH3R-_-GOODname Thanks for commenting! That video was epic, but I can't help bu…" — ytr_Ugzve-GCV…
- "Pls stop-- has NOBODY seen literally every single robot horror movie (that can i…" — ytc_UgyPcoKvA…
- "You're still grinding at argue about no code and code tool but fun fact people l…" — ytr_UgwO7rhLW…
- "For the future generations that watches AI videos everyday, would they even know…" — ytc_UgwfLAdnM…
- "YES! This is so true!🙏 BUT REALLY, in all cases you should talk to the AI like i…" — ytc_UgwSk_LOb…
- "Agree. Op makes the mistake of thinking that the AI we have today, is similar t…" — rdc_g0y7v05
- "I live to work because I enjoy what I do, but if I had to guess why it's not wor…" — rdc_dv0y6dh
Comment
I don't believe it. Digital computers have been faster and smarter than humans in technical disciplines like math, logic problems, data analysis, etc. for decades. likewise we have been grappling with the problems it creates like bank fraud, theft, ruining reputations, weaponizing for war, etc. AI will increase the capabilities and associated problems significantly but in the same space. More challenging but nothing new and our track record proving we are able to handle it is excellent. The key point here is the real dangerous part of the human condition is emotional intelligence, not technical intelligence. Digital computers do not and never will have emotional intelligence whether they run AI algorithms or not. I'm talking about the desire and ability to manipulate, motivate, deceive, or obtain power. That's the part of our conscience that has been the cause of all war and suffering in history. Computers will never overtake humans in that regard.
youtube · Cross-Cultural · 2025-12-19T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
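A coded record like the one above can be sanity-checked against the dimension values before being stored. The allowed-value sets below are inferred from the records visible on this page, not from the project's actual codebook, so treat them as an illustrative assumption:

```python
# Validate one coded record against the dimension values observed on this
# page. NOTE: these value sets are inferred from the visible records; the
# real codebook may define more (or differently named) values.
ALLOWED = {
    "responsibility": {"none", "government", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "resignation", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

coded = {"responsibility": "none", "reasoning": "consequentialist",
         "policy": "none", "emotion": "indifference"}
print(validate(coded))  # → []
```

Running the same check over a whole batch makes it easy to flag any record where the model drifted outside the coding scheme.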
Raw LLM Response
```json
[
  {"id":"ytc_UgzNanwxKvr6SC5jjpF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdDgxfpblCdsBuixB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzxIP0B3IikW-gRrN54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxiQR75jYl3whVFNAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRlzu5yCdkE3suKpt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzchm61pYTnKpprcvN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJdX69KOMGCTB3dix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6YgprDhEu47ZRz3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyjBJ8s5A3utn1vxzV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziLpUv-e2DNwqo93h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
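The raw batch response is a JSON array with one coding object per comment. A minimal sketch of how such a response can be parsed and indexed to support the by-ID lookup (the field names come from the response itself; the lookup dictionary is illustrative, not the tool's actual implementation):

```python
import json

# Raw batch response: a JSON array with one coding object per comment.
# Two entries from the response above, abbreviated for the example.
raw = '''[
 {"id":"ytc_UgzNanwxKvr6SC5jjpF4AaABAg","responsibility":"government",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxiQR75jYl3whVFNAV4AaABAg","responsibility":"none",
  "reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

records = json.loads(raw)

# Index by comment ID so any single record can be pulled up directly.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgxiQR75jYl3whVFNAV4AaABAg"]
print(rec["emotion"])  # → indifference
```

Because each object carries its own `id`, the coded dimensions can be joined back to the original comment text regardless of the order in which the model returned them.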