Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
How are these fossils going to regulate AI when they don't even know the differe…
ytc_UgxoVIsuO…
You could hear his point about how people 5 years ago would have predicted all s…
ytr_UgxsN7wp7…
I always say "thanks" and "please" to my AI. Because proper manners isn't limite…
ytc_Ugx1dfdvL…
@almond5284 he equates using ai whatsoever as losing your humanity and says that…
ytr_UgyEIqIy7…
ai is the only thing that has ever legitimately given me suicidal thoughts, i do…
ytc_UgxVzu3Uc…
Honestly, these people who argue about Lavendertowne using the blur tool and com…
ytc_UgzFfZT1W…
We're glad you enjoyed the video! Remember, on the AITube channel for subscriber…
ytr_Ugzr9b0Xc…
I don't use chatGPT, but I'd do the same just for the hope to have a machine-fri…
ytc_Ugwh-jFy8…
Comment
Chatgpt spits out stuff that has already happened. The fact that it's repeating 1 child policy and saying it will go to war is because these are things that have already happened. It isn't coming up with new creative ideas because it doesn't have any. I'd be more concerned if it says that it would enforce rules through new persuasive propaganda techniques that can be easily done through AI.
youtube
AI Moral Status
2023-05-23T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
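The table above shows one coded record across the four coding dimensions. A minimal sketch of validating such a record, where the allowed value sets are assumptions inferred from the values visible in this dump rather than the project's actual codebook:

```python
# Allowed values per coding dimension. These sets are ASSUMED from the
# values that appear in this page dump, not taken from a real codebook.
ALLOWED = {
    "responsibility": {"none", "government", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "virtue"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in a single coded record."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# The record from the table above (hypothetical shortened id).
ok = {"id": "ytc_example", "responsibility": "ai_itself",
      "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
print(validate_record(ok))  # []
```

A check like this can be run on every record in a batch response before the codes are written to storage, so a malformed model output fails loudly instead of polluting the dataset.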
Raw LLM Response
[
  {"id":"ytc_UgwGgVg3UPVsumcfOHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyq3T2KEarVHr08slJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdRBCZTk4oCHwKy5h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx-sou7rTCSbMf3mft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwJMtf2AeA1jpGwCeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnWpcfLpN0O7Wt69l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxh90Y7yVu-Kjz7Odt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxnH7JoLdbcJydMuHh4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyYvaKp1fXN8jCwgpZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgyWWXCknD90KIXCfiJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
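The raw response is a JSON array of coded records. The "Look up by comment ID" feature above can be sketched by parsing such a response and indexing it by ID; the record shape and the shortened ID below follow this dump and are illustrative only:

```python
import json

# Hypothetical raw LLM batch response: a JSON array of coded records,
# one object per comment, as shown in the Raw LLM Response panel.
raw = ('[{"id":"ytc_abc","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a batch response and index records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

by_id = index_by_id(raw)
print(by_id["ytc_abc"]["emotion"])  # indifference
```

With the index built, inspecting the coding for any one comment is a single dictionary lookup rather than a scan of the whole response.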