Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "say we find a way to make ai sentient, why, just why? it's fine the way it is. w…" (ytc_UgzDzLQCH…)
- "Because AI is principally incapable to provide reliable estimates of the error v…" (ytc_UgwkPBA86…)
- "Counter-point: I watched a 1.5 hr AI anime about about a guy who got sent back i…" (ytc_Ugz04hjoF…)
- "The whole "born special" argument is just devaluing the hard work and effort it …" (ytc_UgzngNFaX…)
- "These companies are lying about AI. They are not making a profit and can't admit…" (ytc_Ugxm6MyEs…)
- "@CC-ce6ngIf you think companies invest their money into AI only for data mining …" (ytr_UgyjnIAIZ…)
- "Everything looks like AI to me. Not happy. AI to me is the same as watching cart…" (ytc_Ugxr1Pa3J…)
- "In the video they told him he wasn't allowed to tell them to settle down because…" (ytc_UgxL61qhS…)
Comment
I disagree. I think AI is inefficient and unsustainable as it consumes too much energy. It's also unreliable as it is prone to bias and malfunction and cyber attacks. AI doesn't have the consciousness and human values, to make real life decisions/judgment calls for us as humans. This will severely limit AI use. It's unrealistic to think that you can replace humans just like that.
youtube · AI Governance · 2025-07-21T12:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
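The four coding dimensions above can be captured as a small typed schema. This is a hypothetical reconstruction: the allowed values are inferred only from the outputs visible on this page (the real codebook may define more categories, and the `emotion` dimension appears open-ended here).

```python
from dataclasses import dataclass
from enum import Enum

# Values below are inferred from the coded outputs shown on this page;
# the actual codebook may differ (hypothetical reconstruction).
class Responsibility(Enum):
    COMPANY = "company"
    DEVELOPER = "developer"
    GOVERNMENT = "government"
    AI_ITSELF = "ai_itself"
    NONE = "none"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    CONTRACTUALIST = "contractualist"
    UNCLEAR = "unclear"

class Policy(Enum):
    REGULATE = "regulate"
    BAN = "ban"
    LIABILITY = "liability"
    INDUSTRY_SELF = "industry_self"
    NONE = "none"

@dataclass
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: str  # open vocabulary in the samples: fear, outrage, mixed, ...
```

Using enums rather than bare strings means a mistyped or out-of-schema label from the model raises a `ValueError` at parse time instead of silently polluting the coded dataset.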
Raw LLM Response
[
{"id":"ytc_UgzZmZVLzPKSHYdj_3B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxsP8V7xEzbGI4oNAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzrDd6MA9XD5JxfG6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyItGTyvstz8J-A74l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgytDp18mJgjbSCbg2V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxLOXl7cIrlRzX1Z614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzRgDTReIsEpa5ujxt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9omvue6fyzc_TMoZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"},
{"id":"ytc_UgwE2i83MncddjfncSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwTNefw0msQmBaJDz94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
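A raw batch response like the one above is a JSON array of coded rows. A minimal sketch of how such a response might be parsed and indexed by comment ID, assuming the field names shown in the output (the `parse_batch` helper is hypothetical, not part of the tool):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array of coded comments)
    and index the rows by comment ID, checking required keys."""
    rows = json.loads(raw)
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
    return {row["id"]: row for row in rows}

# One row copied from the response shown above.
raw = '''[
  {"id": "ytc_UgzrDd6MA9XD5JxfG6d4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''
coded = parse_batch(raw)
print(coded["ytc_UgzrDd6MA9XD5JxfG6d4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each detail page is a single dictionary access rather than a scan of the batch.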