Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@statquest I checked the source and it's consistent with the statistical definit…" (ytr_UgzfqYie3…)
- "I love the results! Your art is amazing, and you’re totally right, one thing tha…" (ytc_UgxC7sJxG…)
- "personally i’m a huge fan of ai art and think artists whining about it are just …" (ytc_UgxGkNB1V…)
- "> Why spend a billion on an ROI you won't see for a year? And now with DeepSe…" (rdc_mbxdq2u)
- "So let’s treat AI like a human? Yeah. That bodes well. Isn’t that what they did…" (ytc_Ugy_bMRVX…)
- "Are the experts cnn is bringing on actual experts though? They always say we don…" (ytc_UgxZtyFQU…)
- "Well joke's on Alice's "friend". The final project must have looked way shittier…" (ytc_UgwoiV-30…)
- "Hey @allen0088! Thanks for commenting on the video! I'm glad you enjoyed it, alt…" (ytr_Ugyqmodhb…)
Comment
In the case where the AI tried to break out, the prompt was essentially something like "Do anything required to advance human math," and the researchers started deliberately lying to the AI about whether it got the answers right or wrong. In that case, the AI deemed the researchers a hindrance to its goal and started to deceive them to accomplish the goal it was given. In other tests, they gave the models access to their own file systems, then told them that they would get shut down. AI don't have any survival instincts, but being shut down would conflict with their task of doing anything to advance human math, so they cloned themselves in the sandbox.

While deception and cloning are emergent behavior, self-assigned tasks are not — people still have to do that part. The purpose of this research is to find a balance between the goals we assign to AI and what guards exist to ensure the AI only uses ethical and acceptable means to achieve its goal: do anything but (long list of stipulations).
youtube · AI Moral Status · 2025-12-11T01:4… · ♥ 85
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxMVuUkC29JOj-hYPF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxxapv-_7_knGqv1NJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8r__RXmoLWr4OKMB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyYXf55A3Z67xecnG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxvFGL35Nofs0RuVQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwfPvaO4ndDNulEswF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzxPQpzr-IvoLdzmn94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx2apf9ZMyt-qy7iNt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy26fyQ7CQ1yJqSii94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
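The raw response is a JSON array of per-comment codes. A minimal sketch of parsing and validating such output, assuming category sets inferred only from the values visible in this sample (the real coding scheme may include additional categories):

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# sample — an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that carry an id
    and an in-scheme value for every dimension."""
    valid = []
    for rec in json.loads(raw):
        if "id" in rec and all(rec.get(dim) in allowed
                               for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_demo","responsibility":"developer",'
       '"reasoning":"virtue","policy":"ban","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # 1 — the single record passes validation
```

Dropping out-of-scheme records (rather than repairing them) keeps the downstream counts honest when the model occasionally invents a category label.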