Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "As cold as it seems, I see this as nothing more than using morality to slow tech…" (ytc_UgiVJWa_Y…)
- "These idiots that think they have automatic right of way really get me, the rule…" (ytc_UgwHiunMP…)
- "We need more people with REAL intelligence and integrity like Karen Hao much mor…" (ytc_UgwjFEA5m…)
- "Chemistry also doesn't tell us how it works. It had to be researched so we could…" (ytr_UgxVI52o8…)
- "I think the biggest difference between digital and traditional artwork is the un…" (ytc_Ugy7HOj1K…)
- "I think you underestimate how blurry things can get, as soon as you ditch human …" (rdc_mdjclr9)
- "I recognize some of the comments posted here have elements of truths and real co…" (ytc_Ugw0_sbTK…)
- "That's what ai already said 50 years ago: robots, mechanisation, (AI) will lead …" (ytc_UgxAvzw5w…)
Comment
> Will AI develop a philosophy, that addresses why am I here, what is my purpose, should I respect my Creator?
> It form a Darwinian sense of its development from an Abacus, seeing humanity as an environment in which it could evolve, and that it has no deep purpose other than to keep getting better.
> Is it possible there be be conflict between AI systems if they do not share a sense of purpose?
youtube · AI Governance · 2025-06-23T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgznRIdyB-qJmWeWC3l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz0SU8Cgnn0NYfup4N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxN-lviYnwA-y-2oN14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwvy32ZmB4PW6aqfX14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyXXdwC0P0kScLu5T94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnGNX8L5-VoMYIn5N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4D2qzt6oy3IAogPh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw95GDLSBz3T1Cbif54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzbg_lxpDO_c31Qnst4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyro_XvXw-UvXe1ljh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
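A batched response like the one above can be parsed into a per-comment lookup table with a short script. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown here, but the sets of allowed values are assumptions inferred from the visible samples and may not match the full codebook.

```python
import json

# Allowed values inferred from the samples on this page; the real
# codebook may define more categories (assumption, not authoritative).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"resignation", "outrage", "indifference", "fear", "approval", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}, validating each field."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {row.get(dim)!r} for {dim}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Hypothetical one-row response for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(parse_codings(raw)["ytc_x"]["emotion"])  # fear
```

Indexing by comment ID this way is what makes the "look up by comment ID" inspection above cheap: each coded comment maps directly to the coding the model emitted for it, and malformed or out-of-schema rows fail loudly instead of silently entering the dataset.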