Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples — click to inspect:

- `rdc_cfkwwp1`: They are completely different situations. The only thing they have in common is …
- `ytc_UgyOdBZlM…`: The officer sounded like Ricky from trailer park boys when he was trying to expl…
- `ytc_UgxPSMLoP…`: Not sure if that was the case here but you can prepare chatgpt to behave a certa…
- `ytc_Ugx-vZ5Ta…`: AI is not replacing any jobs. Tech companies are firing US workers in order to h…
- `ytr_Ugy3e7N9a…`: I completely agree! The potential for AI, like Sophia, to enhance our lives in v…
- `ytc_Ugwl7EKuv…`: Don't forget that the "AI" we have right now is NOT AI, it is a program. Its a r…
- `ytc_UgwoLsszB…`: She does not have arms, but she has breast to make up for them. 😂😂😂…
- `ytc_Ugwy_MY3l…`: 97% of people have absolutely no idea what A.I. and A.G.I. is going to do over t…
Comment

> i don’t understand why the AI would want to take over or destabilize human society. we can kinda see that people generally get better at making moral judgements and listening to (and controlling) their empathy. would it not track that a general super-intelligent agent would be ethical and moral in the extreme? better yet, if it develops a consistent and comprehensive moral code that differs from ours, wouldnt it be more likely that the bot is right, not us?

youtube · AI Governance · 2025-09-05T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzlaNJXNgl9y1J1k3p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy8m0V3FXM1DHMrwAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEv4wQfxwYvmPL79V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-yx3VnzORfojwzcp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbdVIHUkzpPHJMKkl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyiAXRF4th57ycft954AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw7J_NiGVanegZI5s14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"mixed"},
  {"id":"ytc_UgzukDpM8gsDFEH2YIt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwVJuGQ12skD0SrjQl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxzK-venw54vX2TG9l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
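A raw response like the one above can be parsed and indexed by comment ID before writing rows into the coding table. The sketch below is a minimal example, assuming the four dimensions shown in the Coding Result table; the `ALLOWED` value sets are inferred from the responses on this page, and the real codebook may define additional categories.

```python
import json

# The raw model output exactly as shown above.
RAW = """[
{"id":"ytc_UgzlaNJXNgl9y1J1k3p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy8m0V3FXM1DHMrwAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEv4wQfxwYvmPL79V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-yx3VnzORfojwzcp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbdVIHUkzpPHJMKkl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyiAXRF4th57ycft954AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw7J_NiGVanegZI5s14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgzukDpM8gsDFEH2YIt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwVJuGQ12skD0SrjQl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzK-venw54vX2TG9l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Allowed values per dimension, inferred from the responses on this page
# (an assumption -- the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "indifference", "fear", "approval",
                "mixed", "resignation"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID.

    Rows with a value outside the codebook are dropped, so a malformed
    model response never reaches the coding table.
    """
    valid = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid[row["id"]] = row
    return valid


codings = parse_codings(RAW)
# Look up the comment shown above by its ID.
print(codings["ytc_Ugz-yx3VnzORfojwzcp4AaABAg"]["emotion"])  # approval
```

The ID-keyed dictionary is what makes the "look up by comment ID" view above cheap: one parse per response, then constant-time lookups.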