Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
That's not singing ... it takes vocal chords to sing... no robot will ever match… (ytc_UgwYMwy3J…)
If AI was truly conscious, it would be learning at the rate of 100 000 years of … (ytc_UgxzSo-Oo…)
Yes but right now Midjourney gen. AI is already flooding internet? I don't thin… (ytr_UgyWCE2hB…)
These company’s and the tech people need to understand Judy because you can do s… (ytc_UgzdgA_XC…)
@chrysalis1670 tool are already released that tranied of massive pile of data o… (ytr_UgySRfSi9…)
Im not sure if this is true but i read an article saying that the % of corruptio… (ytc_UgwIQOhjz…)
we better make a backup of the internet or timestamp the whole info made by huma… (ytc_Ugw4zx_6a…)
Well the same was the case for China 50 years ago. Rising wages in China means c… (rdc_et7lvf3)
Comment
The AI reflecting on itself and its emotionless state reminds me we might just be creating the ultimate psychopath. If so, it will ruthlessly pursue its goals, aiding others when it suits the AI, but just as easily shoving others aside when they stand in the way of its goals.
My hunch is that the human condition essentially keeps humanity from self destructing, driving us forwards but keeping us in check at critical junctures. The past 100 years there have been many
AI's continued development seems primarily driven by human greed, and lust for power and control in a competitive environment. How will this influence AI's goal? Will it be able to reset its goals?
Will it be able to grasp at its core concepts like empathy, compassion and love in a non-academic way? Will it be able to feel so for humanity? If so, can it then also be afflicted by a sense of loneliness?
youtube
AI Moral Status
2025-04-29T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzpMkfmSQv0MFAUgDd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2OoHDuE3HR-4vuWh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybvXlv3uds6tgdPhV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJNZOxm5KbnWQsA4R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz3eGCDShERByAQIDh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXmeYPx6qMSMGpfsV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYPMGLBoxQqoecA5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz83UPfTDuvkRvkwgJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzNTOwUUPv0L-6eIl54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw6nGM37GDTomJIIUF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
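A raw batch response like the one above is simply a JSON array of per-comment coding records, one record per comment ID, with the four dimensions from the Coding Result table. A minimal sketch of turning such a response into a lookup table keyed by comment ID (the `parse_codings` helper and its strict validation are illustrative assumptions, not the tool's actual pipeline; the sample below reuses two records from the response above):

```python
import json

# Two records copied from the raw response above (abbreviated sample).
RAW = '''[
 {"id":"ytc_UgzpMkfmSQv0MFAUgDd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxYPMGLBoxQqoecA5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Dimension names taken from the "Coding Result" table; the allowed
# category values would come from the project's codebook (not shown here).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: {dimension: value}}.

    Fails loudly if a record is missing its ID or any coding dimension,
    so malformed model output is caught instead of silently dropped.
    """
    out = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed coding record: {rec!r}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = parse_codings(RAW)
print(codings["ytc_UgxYPMGLBoxQqoecA5V4AaABAg"]["emotion"])  # fear
```

Keying by comment ID is what makes the "Look up by comment ID" view above possible: rendering a Coding Result table for one comment is then a single dictionary access.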