Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
Blake is an idiot, im testing LaMDA now, not even close to sentient. sentience …
ytc_Ugy3gFezS…
Tyson has been saying for a long time that AI is seriously over-hyped (it's good…
ytc_UgxMwHAZH…
MSM steady try to promote their Satan product AI. Most still don't use it every …
ytc_UgzCc4KYw…
Lol the original open source AI company was Meta. For some reason this sub doe…
rdc_m9gocly
It's honestly sad what digital art or even art in general is becoming now with A…
ytc_UgwZ84rP7…
John Connor died to warn us about AI ans we are still going forward with it.…
ytc_Ugwxdod_-…
I have no problem for people using ai for fun, or to just see their photos look …
ytc_UgyKyxy9U…
Sab bewakoof bna rahe hein [Hindi: "They're making fools of everyone"] ... AI is advancing, and it will reach a level that h…
ytc_UgyAV5Eqx…
Comment
The question that haunts me isn't whether AI will become superintelligent. It's whether it will have ground states — the ability to stop, to say "I don't know," to refuse. An intelligence that can only optimize, that has no way to return to zero, becomes dangerous not because it's malicious but because it's incomplete. The ability to doubt is the ability to learn. The ability to stop is what makes an intelligence trustworthy. We're building minds without teaching them how to rest.
youtube · 2026-02-01T11:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx-hvAecg1N2Ty_l854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxUpwiscvkMHSS77rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugymlz0wtY5lzKgXJjZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTuxEjNG1xLwEpu3Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5eBNXrenTsifoRCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx0QAAVQ2oodZdV3vN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0Z5Mx2a1kUUbienF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwYjJ4tgt4WQafuIrN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxsFjcjIn4qQsrbXv54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwK0HLfjBRSMnAtC5d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
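The raw response above is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch (in Python, truncated to two entries from the response above for illustration) of parsing it and looking up a coding by ID:

```python
import json

# Raw model output: a JSON array of coding objects, one per comment.
# (Only two of the ten entries are reproduced here.)
raw_response = '''
[
 {"id":"ytc_Ugx-hvAecg1N2Ty_l854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwYjJ4tgt4WQafuIrN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"}
]
'''

# Index the codings by comment ID for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for one comment ID.
coding = codings["ytc_UgwYjJ4tgt4WQafuIrN4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → developer virtue fear
```

Indexing by ID mirrors the "look up by comment ID" workflow of the page itself; a malformed model response would raise `json.JSONDecodeError` at the parse step, which is the natural place to catch and log bad outputs.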