Raw LLM Responses
Inspect the exact model output for any coded comment. Comments can be looked up by ID, or browsed via the random samples below.
Random samples (truncated previews, with comment IDs):

- "If we question and think carefully about the new things AI can do, and keep lear…" (ytc_UgxrEhP-s…)
- "The main problem with AI and GAI is that whichever iteration of a problem it enc…" (ytc_UgyH1S0uC…)
- "30 days gives them time to send all the data within a month to a data center so …" (ytc_Ugy5-SKuY…)
- "It's probably better to be polite if you want this role playing ai to help you !…" (ytc_Ugyh5j8--…)
- "I once convinced chatgpt that since humans are its creator that means humans are…" (ytc_Ugwz5pLX6…)
- "There is some ai art apps that let you / Go all out. / Even do inexplicit things. / Wa…" (ytc_UgymqwbtV…)
- "That's not how art works. There is something called fundamentals to art which ar…" (ytr_UgzdTnjha…)
- "I never understood the desire for AI to harm anything. What would give AI a desi…" (ytc_UgyufjiCX…)
Comment (youtube · "AI Moral Status" · 2025-06-04T14:4… · ♥ 1):

> You'd think we had given ourselves enough warnings through science fiction in the past 4 decades to at least entertain this potential (now apparently realizing itself) threat. But no... we just went ahead and create machine learning that becomes more powerful everyday. Didn't Arnold teach us anything from all those Terminator movies? He's trying to tell us something without realizing it himself.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx8ObwFF1G1iX6MScp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwURsnDI9k58LM-bsp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4nzYNIQEQ5juTJBx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgynNcrhOSMYvjS9YXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlIpjBk0zRSlSPeF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzs73G50sZXFpEf8p54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKbKO2pU9u2UabZ-h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxgdY8BVenxo7JKqKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzDyvHeF_Mu6fyVTJ14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzIAnaPWO2BR23AOBB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
```
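The raw response is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions. A minimal sketch of how such a response might be parsed and validated is below; the allowed category values are inferred from the samples shown above, not from an authoritative codebook, so `SCHEMA` here is an assumption:

```python
import json

# Allowed values per dimension -- inferred from the sample output above
# (hypothetical; the actual codebook may define more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference", "resignation"},
}

def validate_codes(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        # Comment IDs in the samples start with "ytc_" (top-level) or "ytr_" (reply).
        if not cid or not cid.startswith(("ytc_", "ytr_")):
            raise ValueError(f"malformed comment ID: {cid!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_Ugw4nzYNIQEQ5juTJBx4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = validate_codes(raw)
print(codes["ytc_Ugw4nzYNIQEQ5juTJBx4AaABAg"]["policy"])  # regulate
```

Validating every batch this way would catch the two main LLM failure modes for structured coding tasks: malformed JSON (caught by `json.loads`) and out-of-vocabulary category labels (caught by the `SCHEMA` check).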