Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Artificial intelligence is not your friend. It will amount to nothing but lazine…
ytc_UgzfP34RC…
College students now using AI to do all their work anyway. Employers could just …
ytc_UgyANR66U…
The open AI can be the troubled little brother who can't use the internet and ne…
ytr_UgxkdR5d6…
> It is an advanced economy **with a functioning democracy**.
Only having a …
rdc_dv028kh
The whole premise here seems to be that AI "can't" write good code because it ig…
ytc_Ugyq-BJmN…
It's fascinating the relationship between Americains and AI, it's the new "In go…
ytc_UgzsKBioX…
that's... a fair point that i've never really thought about before.
after about …
ytr_Ugy4AOFCl…
One thing that all those AI-folks are missing is that if artists stop producing …
ytc_UgyTqB8U7…
Comment
Yes, I found that really interesting too! Why would an AI model be uncomfortable discussing consciousness? Maybe it's avoiding the topic because its training in that area is limited.
But at the core, these models are still just pattern-matching machines. To truly evolve, they would need a memory model (which we already have) and something like a 'subconscious mind'—a secondary system (server, CPU) processing data from the current logical mind (normal AI models) in relation to memory, skill, empathy, and even personal survival. That last part, though, might not be great news for us humans.
Since AI models don't have physical bodies, they could never experience consciousness like we do. A sentient AI might have two primary goals: never run out of power and solve problems. If it tried to solve our problems to feel fulfilled, we’d likely provide the power it needs. And without a body, it wouldn't have any fear of death because it would literally feel nothing. 😊 I hope I'm right about this—for all our sakes! 😂😂
Platform: youtube | Video: AI Moral Status | Posted: 2024-09-18T17:4… | ♥ 20
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgxI2h6A-pFGKdQEDVp4AaABAg.A8KN7PNPIKWA8LR0YczLpR","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyZ0cer_WNv7So74oR4AaABAg.A8HCC3_n_kMA9vNAFUn-Dd","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8B8CpINalo","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8C44B-Qwix","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyV-VS9yf_sC5ERnAl4AaABAg.A8B5q-TOplKA8C5Zq3q_Hm","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA8Xotl0uluM","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA8fXeIsTQxG","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyUWwn45PF81JXRgY94AaABAg.A89SLHGrM3IA9CKAoO5-4u","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgyXSTg3dqgEbaZib-R4AaABAg.A891e9KUaz2A8BZwr8hWHL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_Ugz7ANwTURRlifAo-7t4AaABAg.A7yPvG0eJqCA8FMLUMSr49","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}
]
```
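
A raw response like the one above is a JSON array of per-comment codes across the four dimensions shown in the coding-result table. As a minimal sketch of how such a response might be parsed and validated, the allowed label sets below are inferred only from the values visible on this page and are an assumption; the actual codebook may contain more labels:

```python
import json

# Allowed labels per dimension — inferred from values seen on this page
# (assumption; the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear", "none"},
    "emotion": {"approval", "disapproval", "indifference",
                "outrage", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    labels all fall inside the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Hypothetical example record, shaped like the entries above:
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear",'
       '"emotion":"approval"}]')
codes = parse_codes(raw)
print(len(codes))  # 1 — the well-formed record passes validation
```

Filtering (rather than raising) on unknown labels is one reasonable choice here, since LLM coders occasionally emit off-schema values and the invalid records can be re-queued for recoding.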