Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- At 2030 only the most creative artists and academics will still be employed..the… (ytc_Ugx53DayS…)
- That bit about erasing the eye spikes rather than altering the eye shape by draw… (ytc_UgzUN29Xh…)
- Human AI need 5 senses. Also computers need binary-trinary processing logics. We… (ytc_UgxOOaXhV…)
- If AI gets too intelligent, it will be a danger to humanity, so I’m going to sta… (ytc_UgxISGgE5…)
- To anyone who says "AI is just made to make things more efficient" You should re… (ytc_UgwPyx_TG…)
- Senai B okay??? I stand by my comments many of these AI programming software is … (ytr_Ugw43hPMV…)
- @alfonsstekebrugge8049 What I’m really trying to say is that unless your using… (ytr_UgxDsvyIE…)
- I bet you snapchat and all those other apps that use "cute" facial filter are da… (ytc_Ugw8qCte5…)
Comment
In my experience talking with engineers working in AI, the overwhelming majority understand that the LLMs presently in development are - at best - a small part of what AGI will be, like the frontal lobe in the human brain; specialized for a specific task but insufficient alone for intelligence. On the other hand, the most pessimistic see LLMs as a dead-end, fundamentally incompatible with AGI. Especially if the latter turns out to be the case, a radically different approach will be necessary, one that we've yet to identify. In either case, though, it's not clear AGI is any different than physics' promise of Fusion.
youtube
AI Moral Status
2025-10-30T18:5…
♥ 268
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxFRZ3ULDDHYaVFVa54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwpBMyuyzK-7qhVLfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0-5WwJf846xl2_8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKsvre-Pndgqw1PnN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzOuqD7kyxc9-ouC594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXzt7wWl9tcxfpmVh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwdhM7CJ9AOS3XZOYt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzF4EYAm-1EQ2_o6pl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9yVxH60zBVEnhqIZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxxy_KXEeL86_ndwU94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
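The "look up by comment ID" workflow above can be reproduced programmatically: parse the raw LLM response as JSON and index the codings by `id`. A minimal sketch in Python — the two rows in `raw_response` are excerpted from the response above, but the indexing step is an assumption about how the tool resolves an ID to its coding result:

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten rows),
# assuming the model always returns a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_UgwpBMyuyzK-7qhVLfV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxFRZ3ULDDHYaVFVa54AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment ID so a single ID resolves to its four dimensions.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwpBMyuyzK-7qhVLfV4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → none unclear none indifference
```

The dimension values printed here match the "Coding Result" table layout above (Responsibility, Reasoning, Policy, Emotion).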