Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The singularity analogy is scary seeing as they are inescapable past the event horizon. Now how do we determine that for ai becoming too powerful.
YouTube · AI Governance · 2023-04-18T12:1…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxbKAHOwWMHYie6h7B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyzHbjn68cChDqNQ8h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyzedawhG8_lOgkb1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxSXFXSlFlKPxGXCrx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugw-nfbPQCIXek8wjnV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw36Ia2Z-e2wtdNNkN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxX1zwFnL4jW1ZBrZh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugy1BlHzLf5RT3xhort4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugx6qsnc_WWI9ec3CRZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgznJOuiw3qT3sm5fMJ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"none","emotion":"approval"]}
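Note that the raw response above is not valid JSON: it ends with `"approval"]}` rather than `"approval"}]`, so a strict parser rejects the whole list, which matches the all-"unclear" coding result recorded for this comment. The following is a minimal sketch of how a pipeline might parse and validate such a response; `parse_codings`, `DIMENSIONS`, and the fall-back-to-`None` behavior are illustrative assumptions, not this tool's actual code.

```python
import json

# Two entries copied from the raw response above; a full, well-formed
# list parses the same way.
raw = ('[{"id":"ytc_UgxbKAHOwWMHYie6h7B4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"},'
       '{"id":"ytc_UgznJOuiw3qT3sm5fMJ4AaABAg","responsibility":"developer",'
       '"reasoning":"contractualist","policy":"none","emotion":"approval"}]')

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Returns None when the text is not valid JSON, so the caller can
    record every dimension as 'unclear' (hypothetical fallback).
    """
    try:
        rows = json.loads(text)
    except json.JSONDecodeError:
        return None
    return {row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
            for row in rows}

codings = parse_codings(raw)
print(codings["ytc_UgxbKAHOwWMHYie6h7B4AaABAg"]["emotion"])  # fear

# A response with the bracket error above fails to parse entirely:
print(parse_codings(raw[:-2] + "]}"))  # None
```

Because one misplaced bracket invalidates every coding in the batch, a production version would likely retry the model call or fall back to per-comment parsing rather than discarding the whole response.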