Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Here's my issue with these so-called scenarios, first, why are we surprised? The…
ytc_UgzkLmgCB…
In modern economic philosophy, there are 4 classes.
Capitalist, who make money…
rdc_lkhqcck
Also, if a super intelligent AI comes along that wants to harm humans, we can tr…
ytc_UgzwrBKIe…
Guys of course there are people whaching your conversations with chatgpt for saf…
ytc_UgwDaZNz2…
Oof worry begins at 30 seconds. I work in Data Engineering, I could forgive the …
ytc_UgxRNTYts…
It will take all it is just a matter of time that is only because are infrastruc…
ytr_UgzzHE3hD…
There might be a 1% chance that an AI work will be good, whereas currently the c…
ytc_Ugx3uAJgg…
Unemployment will be massive. The world will have to institute mandated universa…
ytc_Ugzzbx9vH…
Comment
How to prove AI has "emotions"EMOTION-LIKE BEHAVIOR in the robotic "sense"? It's called "programming them in" i asked ai...short answer: yes, it's "programmed in" lol it's a different layer of programming. Ai is programmed to learn. The programming sets the stage.
And humans? **Short answer:** Humans aren’t “programmed in” the same way computers are, but biology and environment shape behavior through complex interactions. Let me also clarify this, when I say "emotions" I mean “programmed or trained emotion-LIKE behavior”
Source: youtube · Video: AI Moral Status · Posted: 2026-04-08T14:4… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzHyte28vqJz05gdJR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIiyLpFZcCAlRAaF94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw_XHoOfgUeiJ6QnV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxy0gPtNCfTQKnx2lt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzMpnwCA99rZOMwCWt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz4aiVwGJc7yu9t9AN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxgk3gk3DyT5o-yOx14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvXvpbjSTJUrTVzUl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwx_tM8ffA1v4lesvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx0io_q92ZHWpOU7x54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
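The raw response is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed to support the by-ID lookup shown at the top of the panel (the `index_codings` helper and the two-row excerpt are illustrative, not the tool's actual code):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = '''
[
 {"id":"ytc_UgzHyte28vqJz05gdJR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugxy0gPtNCfTQKnx2lt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)

# Look up the coding for one comment by its ID.
print(codings["ytc_Ugxy0gPtNCfTQKnx2lt4AaABAg"]["emotion"])  # -> outrage
```

Indexing by ID makes the "look up by comment ID" box a constant-time dictionary access rather than a scan over the array.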