Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- ytr_Ugzz4cXxM…: Oh, this is not new. "Managers" who don't and can't write or understand applicat…
- ytc_UgwzuARjb…: AGI is very likely NOT around the corner... the trick (for survival?) is to conv…
- ytr_UgzoedzId…: Not hard to be pro-jobs, is it? Why's it so hard to be pro-AI online? It's almo…
- ytc_Ugztb5F7S…: one time i experimented by talking to a bot i made on character ai (of my spider…
- ytr_UgxPlxB3d…: @buttonpusher3786 Yes, feel free to copy and paste. There is a lot of baseless h…
- ytc_Ugx6j5hGJ…: I have soft where they claims that the face recognition in the neural network ca…
- ytc_UgxtaiVlU…: If robots become conscious, does that mean engineers and electricians will becam…
- ytc_UgwBjv34V…: I just don't feel anything when I look at AI art. There's no excitement about s…
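For working with these records outside the page, the same lookup can be reproduced offline. The sketch below is a minimal example in Python; it assumes each batch's raw LLM response has been saved as a JSON array on disk, and the `responses/*.json` layout, the file naming, and the example ID are assumptions for illustration, not the tool's actual storage.

```python
import glob
import json

def build_index(pattern="responses/*.json"):
    """Index every coded comment by its ID across saved batch responses.

    Assumes each file holds a JSON array of records like the raw LLM
    response shown further down, each carrying an "id" field.
    """
    index = {}
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as fh:
            for record in json.load(fh):
                index[record["id"]] = record
    return index

index = build_index()
# The sample previews above show truncated IDs; a lookup needs the full ID.
print(index.get("ytc_Ugzc5lkWoeUcwqw3oRZ4AaABAg"))
```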
Comment
I think there is a language problem. We have a word for the type of intelligence that Roger is talking about in ML, it's AGI. We all agree that AI today isn't that. Roger is talking about AGI and interviewer AI.
AI is, by definition, AI. It doesn't need to understand to be itself—that's inherent. It would need to understand to be AGI. The discussion is really: will AI ever become AGI?
There are some points in favour of that and some against. At some point, we as humans didn't have consciousness, but now we do.
If you look at most very difficult problems in physics or other sciences, we often solve problems before we understand them, so I think it's an important step. The wheel was invented before the rules of momentum. The photoelectric effect was discovered before wave-particle duality.
youtube · AI Moral Status · 2025-09-10T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
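Each coded comment carries the same four dimensions shown above. The sketch below is one way to represent and validate such a record in Python; the value sets listed are only those observed in the raw response below, so the actual codebook may allow more categories.

```python
from dataclasses import dataclass

# Category values observed in the sample batch below; the real codebook may be larger.
RESPONSIBILITY = {"none", "developer", "company", "user", "ai_itself"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed"}
POLICY = {"none", "regulate"}
EMOTION = {"approval", "fear", "indifference", "resignation", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```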
Raw LLM Response
```json
[
{"id":"ytc_UgyxGCD_X-dPwkmxcI54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxz-BAbIRfP8hD_4X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzc5lkWoeUcwqw3oRZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZe2dt4Gv4NhyTZEV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXjddOdAhwUAcQ1-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwByju0kxeUBGiHfch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwP5mcGO_hOuzQg0tJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZpOzJbJSIkJng31F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcHwIHl4ZC1-mMje14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxp1D3Z-lpq8cib4QV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
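Because the raw response is a plain JSON array, turning it back into the per-comment table above only takes a parse and an index. A minimal sketch, assuming the array has been saved verbatim to a file (the name `raw_response.json` and the chosen example ID are assumptions):

```python
import json

# Load a saved raw LLM batch response and print one comment's coding
# in the same Dimension/Value layout used above.
with open("raw_response.json", encoding="utf-8") as fh:
    batch = json.load(fh)

by_id = {row["id"]: row for row in batch}
row = by_id["ytc_Ugzc5lkWoeUcwqw3oRZ4AaABAg"]

for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension.capitalize()}: {row[dimension]}")
```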