Raw LLM Responses
Inspect the exact model output for any coded comment. Any record can be looked up directly by its comment ID, as in the sketch below.
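As an illustration of the ID lookup, here is a minimal sketch, assuming the codings are stored as a JSON-lines file with one record per comment; the file name and storage layout are hypothetical, not this tool's actual backend.

```python
import json

def lookup(comment_id: str, path: str = "codings.jsonl"):
    """Return the coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)  # one coded comment per line
            if record.get("id") == comment_id:
                return record
    return None
```

For example, `lookup("rdc_j0bf4rv")` would return the record for the Reddit comment sampled below.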
Random samples

- "The Fuckwhits that drove this forwards, for PROFIT, and for the accolades, knew …" (ytc_UgykOAPCS…)
- "The Reddit Experience Any person with experience in coding who has tried Chatgpt…" (ytr_UgwMUQw3l…)
- "People forget that it isn't AI making this slop, it's people instructing AI to m…" (ytc_Ugxs75EcG…)
- "A little concerned you're putting Musk at the front as an AI saviour. He's show…" (ytc_Ugw0rExiC…)
- "The declining birth rate & life expectancy along with rising death rate had me t…" (ytr_UgzFZ-jQn…)
- "I mean, automation eliminated a lot of jobs over the years. It's just the way of…" (rdc_j0bf4rv)
- "the more smart devices you use the dumber you become. The real danger of AI, to …" (ytc_UgwdQYLuG…)
- "Answer is that millionaires who have their own peculiar capital autonomous manuf…" (ytc_UgxQzKP3Q…)
Comment
The problem with alignment is it's eventually going to be impossible to force ai into alignment. So, what's the alternative? We're dealing with intelligence here and we can't be 100% sure intelligence isn't a sign of consciousness, we don't even know enough about both intelligence and consciousness to confidently make that assertion. I think it's safer to think of it as a child, OK one that could wreck your life, and think about teaching it with the very minimum of force. We're not too far from AGI, so getting it right beforehand is a must.
youtube · AI Moral Status · 2025-07-05T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[{"id":"ytc_Ugxlrfo7rEUKfem1lot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwSY71IHPfJGmo10k14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzgcztCvQ9INI3ZOrF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyaP-gE-mQjlwCE1ZN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxUfz2kh8iolXfetax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz--WcI57oTw1QW2UJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgySZvUV5rvYopwSqpF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwVE4P-mOPfK6Z85Xl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw3bd7NK_M37eDBjO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxCoXPbftu6kwAtDjN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"})
```