Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is good, but there is not enough talk about the methods by which to make AI safe. What does that look like? How is that designed? What are the potential impacts? Can it be an emergent disruptive presence to the ‘given’ future? No-body really covers this. Nor is it covered here or In this channel. (Love the channel!)
YouTube · AI Governance · 2025-09-15T10:1…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugz4_2mqX3csoCAbRSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwDHyFVDD8ruXFfeP14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyUAKVlx9Cy0XmG1ih4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugyj_yHXjevqO-dlMi54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwJr09FpvsnxeOGw9Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwAWQar0830_rlZYgR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzRpeyyoSW8njIQ-MN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxANtxO7ihXJDeBtGJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyLNYb4YGTTFLndFld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxN7UAYxsicwAT7x9d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"]}
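Note that the raw response above is malformed JSON: the final object ends with `"mixed"]}` instead of `"mixed"}]`, so a strict parser rejects the whole array, which is likely why every dimension in the coding result came back as unclear. A minimal sketch of a parse-and-fallback step, assuming a pipeline-side helper (the function name and fallback behavior are hypothetical, not the tool's actual implementation):

```python
import json

# Dimensions the coder expects, with the fallback value used on failure.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}


def parse_coding_response(raw: str) -> dict:
    """Parse a batch coding response into {comment_id: dimensions}.

    Hypothetical helper: if the model output is not valid JSON, return an
    empty mapping so the caller records every dimension as "unclear".
    """
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return {}  # malformed output -> no codes recovered
    return {item["id"]: {k: item.get(k, "unclear") for k in UNCLEAR}
            for item in items}


# A response with the same trailing "]}" defect fails to parse entirely:
broken = '[{"id":"a","responsibility":"none","emotion":"mixed"]}'
assert parse_coding_response(broken) == {}

# A well-formed response parses per comment id:
ok = '[{"id":"a","responsibility":"company","reasoning":"virtue",' \
     '"policy":"ban","emotion":"fear"}]'
print(parse_coding_response(ok)["a"]["policy"])
```

One repair-tolerant alternative would be to strip the trailing `]}` and re-close the array before parsing, but silently rewriting raw model output can mask systematic formatting failures, so a hard fallback to "unclear" keeps the inspection view above honest.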