Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
About recursive self-improvement, isn't there some way the AI can do research, but if it wants to implement changes to its code, it first needs to present the changes and explain what they do to a human, and only if the human approves will the improvement go into effect?
youtube AI Governance 2025-08-27T16:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         unclear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugyh_ZZdII442fvZzR54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzSzEnMjXwN1Cr9e714AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzXVBSVTIxe4C3b8eF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxv_UEXyhKZ7R9Xii94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwJi5_4gIi6KCzs4dJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "unclear"}
]
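The raw response is a JSON array of per-comment records, each carrying the same four coding dimensions shown in the table above. A minimal sketch of how such a batch response could be parsed and indexed by comment id (the helper name `index_by_id` and the truncation to two records are illustrative, not part of the tool):

```python
import json

# Two records copied from the raw response above (truncated for brevity).
raw = '''[
  {"id": "ytc_Ugyh_ZZdII442fvZzR54AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwJi5_4gIi6KCzs4dJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "unclear"}
]'''

# Fields every coded record is expected to carry, per the response format above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a batch response and index records by comment id,
    dropping any record that is missing an expected field."""
    records = json.loads(raw_json)
    return {
        r["id"]: r
        for r in records
        if REQUIRED_FIELDS <= r.keys()
    }

coded = index_by_id(raw)
print(coded["ytc_UgwJi5_4gIi6KCzs4dJ4AaABAg"]["policy"])  # liability
```

Indexing by id makes it straightforward to cross-check a single comment's codes (as in the table above) against the exact model output it came from.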