Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We do not want Ai to do things we do not understand, you want it to teach you how and than work together to archive your goal. The moment Ai does thing you do not understand, you miss the opportunity to learn and things get out of control. I think we are limited by the way we learn and underestimate what we humans are capable of. Great conversation!
youtube AI Governance 2025-09-05T05:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxpVgWmGoJyARsmaxl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzA_XDcXYbP9pulHw14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAvudo8WdxvuEOIyd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxGII4nsOB5n5rcwJ14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzMAV_NNraj8qXBapd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzOh2WsptWSTmlHjRh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZcrDNmGybBc_hPPp4AaABAg","responsibility":"ai_itself","reasoning":"none","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxBbtg8kE7-P6xvFSl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwlJZHhqriFWsH-UDl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwzC3aBA_kla91t8rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
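To check a coded comment against the raw model output, the JSON array above can be parsed and indexed by comment id. This is a minimal sketch, assuming the raw response is always a JSON array of per-comment records with the four coding dimensions shown; the lookup uses the second record, whose values match the Coding Result table for this comment.

```python
import json

# Assumed shape of the raw LLM response: a JSON array of records,
# one per comment, each carrying the four coding dimensions.
# Only the record relevant to the table above is reproduced here.
raw = '''[
  {"id": "ytc_UgzA_XDcXYbP9pulHw14AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "mixed",
   "policy": "none",
   "emotion": "approval"}
]'''

records = json.loads(raw)

# Index the batch by comment id so any coded comment can be looked up.
by_id = {record["id"]: record for record in records}

coded = by_id["ytc_UgzA_XDcXYbP9pulHw14AaABAg"]

# Cross-check the parsed record against the Coding Result table.
assert coded["responsibility"] == "ai_itself"
assert coded["reasoning"] == "mixed"
assert coded["policy"] == "none"
assert coded["emotion"] == "approval"
```

A lookup table keyed on id makes the comparison robust even if the model returns records in a different order than the input batch.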