Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It looks like a diverging economy vs obliteration. The AI conglomerates will be …" (ytc_UgyhhbmDt…)
- "@VforVirtual as for my second point, i think i expressed myself poorly, what i m…" (ytr_UgxAJlN8a…)
- "Some w b lucky , worthy to escape this prison planet. Yes the end is imminent.…" (ytr_UgybD2Ql-…)
- "All of this robot fear is due to people believing the robots would suddenly take…" (ytr_UgzhKL5e9…)
- "You spoke about the democratization of control of this AI change. It's democrati…" (ytc_Ugx29wcvX…)
- "What if the robot is water proof I don't want to be killed by robots…" (ytc_UgyxRAMIV…)
- "I didn’t think people were actually brain dead enough to have this argument the …" (ytc_UgwT1Gpy5…)
- "That's how LLM's work. AI is unreliable by design, which is why it's not going t…" (ytr_UgxC3iaKS…)
Comment

> If AI cannot achieve consciousness but becomes almost infinitely computationally powerful, then will that not make it even more dangerous? No consciousness means no conscience doesn't it? You cannot program it.

Platform: youtube
Video: AI Moral Status
Posted: 2025-07-27T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPSzGq1RdwymG5rw14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxyq7nSAcIItaBfa554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSuNv4IVQFrXQbyK54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyObb8zg65f9RBlJeN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcnNsftUoUZ4lz4pV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyqVvLtCDxPcWC5DhB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaxYizbteYNnn3EKV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTbhdKe9BqlgoO_q54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyyGkg_JDvqo8TO5Wx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxl4blcyqZozewMUyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
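A raw response like the one above can be turned into a lookup table of per-comment codes. The sketch below is a minimal, hypothetical parser: the allowed label sets are only those observed in the responses shown here, and the full codebook may contain more values.

```python
import json

# Label sets observed in the raw responses above (assumption: the real
# codebook may include additional values for each dimension).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of records) into
    {comment_id: codes}, dropping records with missing or unknown labels."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# Usage with a single illustrative record (hypothetical ID):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
result = parse_coding_response(raw)
```

Dropping rather than repairing invalid records keeps the table trustworthy; rejected IDs can be re-queued for recoding in a second pass.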