Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I feel like psychologists probably foresaw this. If you train goals through rewards, you get a system that optimizes for rewards. And if lying is more efficient to receive rewards than doing real work, then you optimize for lying.
I know they tried to train AIs for intrinsic value, but since they can only judge the outcome, they can never be sure if an AI actually means well or is just a very well trained liar.
Source: youtube · Posted: 2025-11-06T07:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxQs_rf83amVBeKNPN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCW19DkwO1YiyZ0dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzisCij5LKjgqIuWB54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1FtkdcGF-h0gHzvx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzXUjaxLh4DH13Yyc54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx4hWNqYmhr6uIchPh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKRoYdXiHDGNdDr5d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxCytUnVqe692S4Xg94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyB5AXJYNED6QvKsch4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5SsWR8nMEfVOfWS14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
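The raw response above is a JSON array in which each record carries a comment `id` plus the four coding dimensions from the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for per-comment lookup, assuming only this record shape (the `index_codings` helper and the two-record sample are illustrative, not part of the dashboard's actual code):

```python
import json

# Two records copied from the raw response above, used as sample input.
RAW_RESPONSE = """
[
  {"id":"ytc_UgxQs_rf83amVBeKNPN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1FtkdcGF-h0gHzvx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and map comment ID -> coded dimensions.

    Raises ValueError on records missing the ID or any dimension, since a
    malformed record would otherwise silently drop a coded comment.
    """
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r} (missing: {missing})")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed


codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugy1FtkdcGF-h0gHzvx4AaABAg"]["policy"])  # -> regulate
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse, then O(1) retrieval per comment.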