Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can't regulate it since we don't have authority over other countries. As with any technology, we cannot allow enemies to get ahead of us on it. If AI becomes a threat to mankind, it doesn't really matter if it starts here or in China. What does matter is allowing a country like China to get high-level AI first. I think the real danger in AI is how quickly we will become dependent on it and trust it more than anything else. If someone actively inserts political ideology within it, for example, it would always give politically spun answers in psychologically subtle ways, and the younger generations will take it as fact without questioning it. AI will also give governments and corporations unprecedented control over the masses through monitoring and prediction algorithms. Edit: I see Elon touched on one of my points at the end.
Source: YouTube · AI Governance · 2023-04-18T02:4…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxj1DwBjj0x-fR194Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwaFsztO9Ys4JNIo0p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3CrdK78igcT8bjQ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw2PLMZw-EdhZrl6Q94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyfe2xLjzyWyzh8YFJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxaP3i0YZChR4NuWHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6tR2U6pOXPayGlnB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzUx893kaex_2F21Nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy5jefjxtuEuXhkcVh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyBCLPQlNj_e_5ovLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
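A raw response like the one above can be parsed and validated before the per-comment results are stored. The sketch below is an assumption about how such a step might look, not the tool's actual implementation: the `CODEBOOK` value sets are inferred only from the labels visible in this batch (the real codebook may define more categories), and `parse_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above; the actual codebook may be larger (assumption, not the tool's code).
CODEBOOK = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment id,
    dropping any record whose values fall outside the codebook."""
    valid = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid[rec["id"]] = rec
    return valid

raw = ('[{"id":"ytc_Ugxj1DwBjj0x-fR194Z4AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugxj1DwBjj0x-fR194Z4AaABAg"]["emotion"])  # fear
```

Indexing by the `ytc_…` id is what lets the UI join a batch response back to the individual comment shown above.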