Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Also to add to the AI is Inevitable point. We had a big Nike Marketing Manager c…
ytc_UgxWAc0gL…
We don't need or want driverless trucks or electric trucks, we need trucks with…
ytc_Ugz9uHPHV…
I've been working on my own table top card and board games for years now, at thi…
ytc_Ugw68Ay6N…
Oh absolutely this. "Good looking" AI images just fail to impress me on a fundam…
ytc_UgzJA-pAK…
Alex sounds like a parent after a parent teacher meeting... I can feel ChatGPTev…
ytc_UgwKD6Wqv…
AI's ability to destroy humankind is simple. Convince us we hate each other, and…
ytc_UgxFTGaG_…
"And from what I can tell from people that have ditched AI, the biggest motivato…
ytr_UgxBqh02K…
I think it's fair to say this is one goal. Global Governance with A.I at the hel…
ytc_UgzjGXbWo…
Comment
This raises a bigger question for me: if AI can’t truly understand emotions without context, what happens when it starts to simulate or develop emotional understanding over time? I explored this idea in a short video called “AI will start to feel.”
👉 https://youtu.be/CWQEb-Z1q3k
youtube · AI Moral Status · 2026-01-11T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyzLh2FPfK97vOksAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgylQCsp6VWsXsPh_JF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyxaXsCHp6tbAvU78p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzfkXYp8NO9sdN2bHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyObqIIehqLCfvolmx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwHfVOiLzfpMb4_Lhd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgymqZrf73_hdQFA7EZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy1E3XS6_XrKfTt2IN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyHx6qegC-IRd8aZrx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgwZMFDoW02bgdYvqXF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
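Since the raw LLM response is a JSON array of per-comment records, looking up a coded comment by its ID (as the tool above does) reduces to parsing the array and indexing it. A minimal sketch, using field names and two records taken verbatim from the response above; the function name `index_by_id` is illustrative, not part of the tool:

```python
import json

# Excerpt of the raw LLM response shown above: one record per comment.
raw_response = """
[
  {"id": "ytc_UgyzLh2FPfK97vOksAZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwHfVOiLzfpMb4_Lhd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgwHfVOiLzfpMb4_Lhd4AaABAg"]["policy"])   # -> ban
print(codes["ytc_UgyzLh2FPfK97vOksAZ4AaABAg"]["emotion"])  # -> fear
```

Records whose ID is missing from the index (or whose JSON fails to parse) would surface as a `KeyError`/`json.JSONDecodeError`, which matches the "unclear" fallbacks visible in the coded output.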