Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some things about AI right now is that it can't actually have an opinion, or really think at all yet. It's still in the stage where it's just an algorithm that can mimic "thinking". There's absolutely nothing to worry about this kind of ai becoming sentient randomly and deciding it hates humans. Most of what they say in relation to feelings or opinions is all completely fabricated.
YouTube · AI Governance · 2023-07-13T20:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwz3uHg9a8vJZ7ugs14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyLAAInunAx6kvHPZ94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzsclS5QhS8ff4SYZt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzHGJv8RBHcGr2qARh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz3QAVwgCINwJ0Zx8V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzeE4s8kv_mhDbhCWZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw6ufeTL0JGZ4RGPVZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugwb3xo_Htid5kUTQ0x4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxq7QwUCgdfHXFwq0l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzIbKLFp4lD13xa1UZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
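A raw response in this format can be parsed and checked before the per-comment codes are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: it assumes the four dimensions shown here, and the allowed value sets are inferred only from the values appearing in this response (the real codebook may include others).

```python
import json

# Example raw LLM response in the format shown above (two entries for brevity).
raw = '''[
  {"id": "ytc_Ugwz3uHg9a8vJZ7ugs14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz3QAVwgCINwJ0Zx8V4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Value sets inferred from this one response -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate(codings):
    """Return (id, dimension, value) triples that fall outside ALLOWED."""
    errors = []
    for item in codings:
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                errors.append((item.get("id"), dim, item.get(dim)))
    return errors

codings = json.loads(raw)
print(validate(codings))  # -> [] when every value is in the inferred codebook
```

Running the check before storage makes it easy to flag responses where the model drifted outside the codebook (e.g. an unexpected `"government"` under `responsibility`) instead of silently writing them to the results table.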