Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is definitely safer if there was no AI research until Stephen can prove it can be done safely, than if there was unlimited AI research until Eliezer can prove that it can't.
YouTube · AI Governance · 2024-11-18T22:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx6qIp5l_aI9AfElqp4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy8QpWKbxQ9n7H_aAx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugx7QO9apJwSwFJJDFd4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy_7LlslQK-jD28V2J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy0yNh2c8WMFRwdpqJ4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgyEHcmpJN4NHiXUwF14AaABAg", "responsibility": "user",      "reasoning": "contractualist",   "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugymnp8X3WB_5uO6CpF4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwcGbsvaLER3kwY4m54AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_Ugx2VcfgXyVMdW6L8P94AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugw5nG423IJW_SdQh0V4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "unclear",  "emotion": "resignation"}
]
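The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response might be parsed back into a per-comment lookup, using only Python's standard library (the `raw` string below is abbreviated to one entry from the response above; the variable names are illustrative, not part of any real tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Abbreviated here to the single coded comment shown above.
raw = '''[
  {"id": "ytc_Ugy_7LlslQK-jD28V2J4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# Index the codings by comment id for quick lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment id.
result = codings["ytc_Ugy_7LlslQK-jD28V2J4AaABAg"]
print(result["policy"], result["emotion"])  # regulate fear
```

Keying the array by `id` makes it straightforward to join each coding back to its source comment, which is how the single "Coding Result" table above relates to the full batch response.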