Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I don't use Chat GPT and I use old fashion spell check, visual editing and oral …
ytc_UgxQpiOCW…
There is a fallacy in this logic. If a businessman is introducing AI in his comp…
ytc_UgyYELmcc…
Hey CJ, I’m a college student from Indonesia, and I feel like many college gradu…
ytc_UgwwLyWiB…
Here me out this is a good thing we all won't just starve you know we will evolv…
ytc_UgzD9Y6zF…
So we should take out those capable of building AI to save the rest of humanity …
ytc_UgwkDoaad…
i have to say it, it is not ai art. it is not art. people who say “i make ai art…
ytc_Ugyy9zB-P…
Not good Roboter at all😂😂😂 looking righg into the cam. And their skins have a gr…
ytc_UgwXmhBkH…
I totally agree with the thing of liking art more if you like the person. In a s…
ytc_Ugx6JoUTc…
Comment
The gorilla problem is a heuristic, not an empirical prediction. It highlights risks arising from capability asymmetry, but it does not establish inevitability, nor does it map cleanly onto human–AI relations, which are socio-technical, governed, and intentionally designed.
youtube
AI Governance
2025-12-21T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyhBf5BVlDHodn84gV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxw7X0O4vKA7JI3SF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhYT4noHd8zKyNrOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwsLunRNFOTj6OGb3B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzQijXxkX07iDGOZF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8SE2jC-VZ4dMJWUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxQsVeNqyJHC2iLYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgykJe5oSUrOeRbU2lB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-7SMUrvnTJO_O_jx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzCQK8OWJjed3abHA94AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
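Batch responses like the one above are only useful if every record conforms to the codebook. The sketch below is a minimal validator, assuming the code vocabularies inferred from the values visible in this sample (the full codebook may define additional values) and a hypothetical `validate_coding` helper name:

```python
import json

# Allowed codes per dimension — inferred from the sample output above,
# NOT an authoritative schema; extend to match the real codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "outrage", "mixed", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against ALLOWED.

    Raises ValueError on malformed JSON, a missing/malformed comment id,
    or a code outside the expected vocabulary, so a bad batch fails
    loudly instead of silently polluting the coded dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"missing or malformed comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={value!r} not in codebook")
    return records

# Example with a made-up id, matching the record shape shown above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(len(validate_coding(raw)))  # → 1
```

Failing fast at parse time keeps the "Coded at" table upstream trustworthy: a record only reaches storage once all four dimensions carry a known code.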