Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm generally an AI accelerationist. I see AI regulation like whack-a-mole. As soon as you regulate one thing, some other problem will pop up. I'd rather just accelerate the development of the technology itself so we can see all the ways it messes with society quickly and tackle it head-on all at once. Like ripping off a band-aid.
youtube AI Responsibility 2026-04-24T21:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzwsWrSpBZ4oE0iiDt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxBfpzdSF9C0nSfCsZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwywFrDW9DrIgprvP94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgztFNAbVBYBom1E7214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwOLd2Z6MEfsGrhK6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgywUdayZUNPzK3ztQB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzPfOccFjO2Iy-fLHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwAeKNDEk58oDowi194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxhiAkfjX593YjW8gh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz26oFBIu3wrRtFTnB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
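Because the raw response is a plain JSON array keyed by comment id, the coded dimensions for any single comment can be recovered with a small lookup. The sketch below (a minimal, hypothetical example; only two entries are copied from the array above for brevity, and the id used in the lookup is the entry whose values match the Coding Result shown earlier) parses the response and pulls out one comment's codes:

```python
import json

# Two entries copied verbatim from the raw LLM response above
# (the full array contains ten such objects).
raw = '''[
  {"id":"ytc_UgwOLd2Z6MEfsGrhK6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgywUdayZUNPzK3ztQB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

# Index the coded records by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Fetch the coded dimensions for one comment.
coded = records["ytc_UgwOLd2Z6MEfsGrhK6p4AaABAg"]
print(coded["reasoning"])  # consequentialist
print(coded["emotion"])    # approval
```

The same pattern scales to a whole batch: parse once, index by id, then join each coded record back to its source comment for display or downstream analysis.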