Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
- "1. AI images are built on data stolen from artists and chimera combined to produ…" (`ytr_UgxFCKpSt…`)
- "It all depends if corporations are willing or not to substitute doctors for AI's…" (`ytc_Ugw93H5nF…`)
- "You want politicians to act on deep fakes, then people basically need to make de…" (`ytc_UgzBDWA_5…`)
- "The thing is, if the people at the top refuse to replace themselves with AI, tha…" (`rdc_kyh1z57`)
- "what I noticed is these people saying AI won't replace us is coming all from a c…" (`ytc_Ugy_47Gzb…`)
- "It fucking pisses me off how long after typing 'pro ai art argument' into youtub…" (`ytc_Ugy7E1g5U…`)
- "Whoever keeps giving Ai more intellectual power and funding it, PLEASE STOP!! AI…" (`ytc_UgxiwXlKh…`)
- "I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another plac…" (`rdc_jj7jkwd`)
Comment
AI is developing faster than humans can comprehend which is alarming considering it is controlled by humans who make mistakes and even more alarming is the fact that humans are developing sophisticated weapons to kill humans faster with more sophistication. Thus when AI reaches self-programming with human emotions like (ANGER) it will act and react similarly with catastrophic consequences
Source: youtube · Video: AI Moral Status · Posted: 2023-11-08T00:0… · ♥ 37
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwaO-a1pb4Ifg4OHtF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy6ycqi7Klm8gjDBst4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwP3cO0zvQwg_-Zy_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzfktJPEXW1c2QDw9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgxOAfDnitRVOCxT7KB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgygQvktmV-LFmqloKR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxP_Yd9hbzNSDWk4Y54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgySXBtDPEAQYuqjRlN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwV0BSzoZ8tM1HTu894AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugzg6Mfa6zrKCtNp5g54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]
```
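A raw response like the one above can be parsed and checked before it is accepted into the coding database. The sketch below is a minimal validator, assuming the value vocabulary inferred from the responses shown here (the full codebook may contain additional values); `validate_batch` and `SCHEMA` are illustrative names, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the raw responses
# shown above (assumed vocabulary -- the real codebook may be larger).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only records that carry an id and in-vocabulary values."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # a record without a comment id cannot be stored
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Rejecting, rather than silently correcting, out-of-vocabulary values makes it easy to spot batches where the model drifted from the coding instructions.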