Raw LLM Responses
Inspect the exact model output for any coded comment. Any coded comment can be looked up by its ID; a handful of random samples is listed below, and a minimal lookup sketch follows the list.
- `rdc_o7w5gpk`: I'll be switching as well. Can anyone recommend a good alternative for image g…
- `ytr_UgxGcQZN5…`: @spadesofpaintstudios1719 we should be friends and treat AI equally. Teach and g…
- `ytc_Ugxq9As8p…`: Humans build robots themselves, then start crying that they can't get a job. Humans that compa…
- `ytc_Ugy43WABx…`: I dunno, it just seems so rude to not write please and thank you. ChatGPT is alw…
- `ytc_UgzGh-Ddx…`: I think, the best method for seeing if an AI is Conscious, is through completely…
- `rdc_d2xg7lg`: Depends. If it's anything like the Canadian north, [this is how the land looks.]…
- `ytc_UgwMQxii1…`: No and here's why. Artificial intelligence will never truly reach sentience, but…
- `ytc_UgzkL_K4u…`: What she first said is that the technology had a more difficult time with darker…
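Under the hood, lookup by ID only needs an index over the saved model outputs. Below is a minimal sketch, assuming each raw response is stored on disk as a JSON array of coded comments; the `raw_responses/*.json` layout and the `index_coded_comments` helper are assumptions for illustration, not the tool's actual storage.

```python
import json
from pathlib import Path


def index_coded_comments(directory: str) -> dict[str, dict]:
    """Index every coded comment by its ID across all saved raw responses.

    Assumes each file in `directory` holds one raw LLM response saved as a
    JSON array of objects like those shown under "Raw LLM Response" below.
    """
    index: dict[str, dict] = {}
    for path in Path(directory).glob("*.json"):
        for entry in json.loads(path.read_text(encoding="utf-8")):
            index[entry["id"]] = entry
    return index


# Hypothetical directory name; the ID is the one from the detail view below.
codes = index_coded_comments("raw_responses")
print(codes["ytc_UgyEuQuEXXhMcUO-7xd4AaABAg"])
```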
Comment
> Yes. AI can wipe out jobs etc. But taking over? You are downplaying an important, and actually the main cause of all havoc on Earth: greed and ambition. The question is, why would AI want to take over? AI decided US should preemptively attack China because you ask it "what should I do in case of a military conflict". Would AI decide to attack on its own? Maybe. But only if you leave that decision to it. So the answer is not to stop working on AI. It's "not give them control and decision making rights at country or world level". AI would only turn over humans, if humans give them control. So layers of safety is needed. In case an idiot decided to put them in charge, humans should be in the chain of decision making to stop it. Simple as.

Platform: youtube · Topic: AI Moral Status · Posted: 2025-04-28T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
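Each row of this table is one field of the coded record. Here is a sketch of that record as a typed structure, using only the label values that actually appear in this section; the real codebook likely defines more (for instance, `policy` values other than `none`).

```python
from typing import Literal, TypedDict

# Only labels observed in this section; the full codebook may define more.
Responsibility = Literal["none", "developer", "company", "user", "ai_itself"]
Reasoning = Literal["consequentialist", "deontological", "mixed"]
Policy = Literal["none"]  # sole value seen in this batch
Emotion = Literal[
    "approval", "fear", "indifference", "outrage", "mixed", "resignation"
]


class CodedComment(TypedDict):
    """One coded comment, as emitted in the raw LLM response below."""

    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```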
Raw LLM Response
[
{"id":"ytc_UgyPm42CaxoEv78QQj94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYK9wC3JJpBJ5CPMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzX8bbQ0YEGcxEeUDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwklVXWAcnVIlk_ONB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyEuQuEXXhMcUO-7xd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzAfuFHZSKrNXpd4Sd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwp1MxDagCqwfdg1TB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzELa3VQa_F9sfgbJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZRALeUg3UYF_d_Ch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSjZ9L7mkOJf_PIZF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
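Because the model emits a whole batch as a single JSON array, a consumer should validate the payload before trusting the codes; LLM output is not guaranteed to be well formed. A defensive parsing sketch follows; the strict same-keys check is an illustrative choice, not necessarily what the pipeline enforces.

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, rejecting malformed entries."""
    entries = json.loads(raw)  # raises json.JSONDecodeError on non-JSON output
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of coded comments")
    malformed = [
        e for e in entries
        if not isinstance(e, dict) or set(e) != EXPECTED_KEYS
    ]
    if malformed:
        raise ValueError(
            f"{len(malformed)} malformed entries, e.g. {malformed[0]!r}"
        )
    return entries
```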