Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- AI is an integral part of the development of our future as an Idiocracy. Becaus… (`ytc_UgwyWrdTL…`)
- The people who built the machine and it’s algorithm are racist, maybe not delibe… (`ytc_Ugzzhu128…`)
- Very clear and great analysis. AI is the bastard child of capitalism and corpora… (`ytc_Ugy9Y0Y4q…`)
- I think those who go after artists for not liking AI do so because they know wha… (`ytc_UgyZptrD9…`)
- Am i apposed to be afraid of AI or the idiots and the fake videos people are bel… (`ytc_UgywqwZdU…`)
- Is better to sit in Robotaxi cool and dry to turn around or wait out in the hot … (`ytc_UgxNNjYqA…`)
- my ai chats ANYWHERE consist of me being aggressive with the ai, being kind to i… (`ytc_UgwN8NKRd…`)
- AI was always going to be superior to the individual and superior to the collect… (`ytc_UgwRSXTNt…`)
Comment
Had people argue about AI being safe and don't believe the movies etc before in comment sections, it's just mind blowing how people can't identify the dangers of AI. It requires such minimal critical thinking to figure out some of the dangers.
Humans have proved throughout history that any large breakthrough that can be used to profit, for war or to just get ahead of others in some way will be utilised often in malicious ways. As the Geoffrey said, having AI in the hands of massive companies isn't a good thing and should actually be terrifying.
Source: youtube · AI Governance · 2025-06-26T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyzMO6Yav3xEoh8Y754AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwuXjB61tiFqDj-cvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBg3tYIm4IP9olICN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyu1ZKTpimcgSE82iV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdYx05LDk7Ut9RukV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwer29hRPUpLEhIyFx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwp0Sxv_mB55NaANXB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxV5zprxMKIKoVv1rh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwbw9ms4SgCmlVuuEB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
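A raw response like the one above has to be parsed and validated before the codes reach the result table. The sketch below shows one way to do that in Python; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown here, but the allowed value sets are only inferred from these ten records — the real coding scheme may include values not visible in this sample, and the `parse_coding_response` helper is hypothetical, not part of any documented pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "approval", "fear", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)  # raises ValueError if the model broke the JSON
    valid = []
    for rec in records:
        # Drop anything that is not a dict keyed by a ytc_ comment ID.
        if not isinstance(rec, dict) or not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep the record only if every dimension has a recognized value.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = (
    '[{"id":"ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg","responsibility":"distributed",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)
print(parse_coding_response(raw))
```

Validating against an explicit allow-list matters here because LLM coders occasionally emit values outside the codebook; silently storing those would corrupt downstream tallies.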