Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below:

- People will still call the police racist when one of them was black. Guess that … (ytc_UgxKh0c8i…)
- Blurring the lines between fiction and reality - why do we need AI when we alrea… (ytc_UgzkCkS0q…)
- I think emotions and intelligence are two different things. I think emotions can… (ytc_Ugxk_3AUF…)
- A lot of the AI that are being used are essentially a closed box. It's not possi… (ytc_UgxVI52o8…)
- I find that AI integrated into platforms currently causes more work and time was… (ytc_Ugwd7PHqi…)
- You can't really test these AI's properly with just language alone, you need to … (ytc_Ugz51y7UG…)
- AI can't be programmed or taught human compassion. It doesn't have a heart, or s… (ytc_UgxKPdwDx…)
- no. ai doesnt have a racism problem. ai has a "is put into positions where it ha… (ytc_UgzVlkRkV…)
Comment
very few people understand how AI codes work. once it goes to AGI and starts to tell the 'user' that they are wrong and they start to go rogue, we aren't fast enough to stop AI activity to prevent such an occurrence. Not unless you poweroff the whole world. what you saw in the movies isn't far off. Computers do not know what is to be righteous, difference between good and bad ethics, what is means to be compassionate, presence of God and how to 'help' others and being selfless.
Platform: youtube | Topic: AI Governance | Posted: 2023-04-18T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxj1DwBjj0x-fR194Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwaFsztO9Ys4JNIo0p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3CrdK78igcT8bjQ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2PLMZw-EdhZrl6Q94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyfe2xLjzyWyzh8YFJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxaP3i0YZChR4NuWHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6tR2U6pOXPayGlnB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzUx893kaex_2F21Nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy5jefjxtuEuXhkcVh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyBCLPQlNj_e_5ovLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
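A response like the one above can be turned back into per-comment codings by parsing the JSON array and indexing it by comment ID. The sketch below is a minimal, hypothetical example: the allowed value sets are inferred from this sample output, not from a documented schema, and `parse_response` is an illustrative helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (an assumption, not an official schema).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "none"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of codings) into {comment_id: coding}.

    Rows with a missing ID or an out-of-vocabulary value are dropped,
    so a malformed model output cannot pollute the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage: look up one coding by its comment ID.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codings = parse_response(raw)
print(codings["ytc_example"]["emotion"])  # fear
```

Validating against a fixed vocabulary before storing is what makes a row like the "Coding Result" table above trustworthy: any hallucinated label simply fails the membership check.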