Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I’m not even going to go there I have seen vids and people talking ab how ai can…" (`ytc_UgyO3JzC9…`)
- "Male and female human stuck in a lab forced to live and procreate just to keep t…" (`ytr_UgwqlIXQJ…`)
- "Cry baby, when you draw your drawings your non AI mind also have been trained/in…" (`ytc_Ugxrh1fN2…`)
- "Sam, thanks for continuing to fight the good, here. Even apart from AI, we've al…" (`ytc_UgxZPn-zC…`)
- "Also I refuse to pay for ANYTHING MADE WITH GEN AI, cus ain't no way. You want m…" (`ytr_UgztQKrrJ…`)
- "Automated trading caught wind of the AI keyword. Then the fund managers looked a…" (`rdc_ogljzzf`)
- "@coffeebeanz No. Like I said before, AI does not \"steal art\". It's trained on h…" (`ytr_UgxTgxSjm…`)
- "I like that line in iRobot where will is talking to the bot saying can you do th…" (`ytc_Ugy2WepE4…`)
Comment
Some people are worried about A.I thinking for itself because the consequences could be dire for the whole world. If it were to be programmed not to think for itself then the threat would still not be eliminated. It's a bit like programming people not to think for themselves to control the population by controlling the narrative and silencing dissent. However, people are learning to see what the programming is, eliminating the programming and learning to think for themselves. Imo, A.I is dangerous because if it's based on how people think then what's going to stop it from rewriting it's programming by learning how to think for itself.
youtube · AI Governance · 2025-07-24T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwgXFeqQxj8lbwxDIF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwXA5_wuVjICcSTaJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIjQ3yDFzE_JuOKCl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyT6k52gjwBfdckVs54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwaAoNcLHzUf1NLCbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuwkdYFToDSgwCuPt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBEJNpb6RpdlM15NV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxEl97UrRSkBBI1Xa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz56JAJYsyjYsSdA8N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYCA62EZfXmKyS23J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
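The raw response is a JSON array with one object per comment, carrying the four coding dimensions alongside the comment ID. Looking up a coding by comment ID can be sketched as follows (a minimal sketch, not the tool's actual implementation; the function name is illustrative, and only two rows of the batch above are reproduced):

```python
import json

# Two rows from the raw batch response shown above.
raw_response = """
[
{"id":"ytc_UgxEl97UrRSkBBI1Xa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwgXFeqQxj8lbwxDIF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
"""

# The four coding dimensions plus the comment ID, as they appear in the JSON.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw JSON batch and index codings by comment ID, checking keys."""
    codings = {}
    for row in json.loads(raw):
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
        codings[row["id"]] = {k: row[k] for k in REQUIRED_KEYS - {"id"}}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_UgxEl97UrRSkBBI1Xa14AaABAg"]["policy"])  # regulate
```

The key check guards against the model dropping a dimension in a batch row, which would otherwise surface later as a missing value in the coding table.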