Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Intelligence isn't always a good thing or have an absolute definition. If we model AI after our idea of intelligence then there is only one outcome given humanities track record. We do not live in equilibrium and are in a constant state of selfish endeavors that ultimately result in destruction of some sort. Any artificial entity by way of the creator will follow the same fate.
youtube 2024-05-01T16:5…
Coding Result
Dimension       | Value
Responsibility  | developer
Reasoning       | virtue
Policy          | none
Emotion         | resignation
Coded at        | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxOoa4FixJpMwwykFh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwbTffeyqRVjB9IRtt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzJUYkVej3QLEHaGz54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw5AtRpQx9GGlC3XbR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-UVakVVQMoBoX6Mh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx2ztaVpwAnjqbAWgt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugww57KxxAYLwN4m8RR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwSK9D5K5WHkjTXiJh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyvbulXDmxAGY2qU3R4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzpKz6c7Gv_AsM-yaZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
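As a minimal sketch of how a coding result can be recovered from a raw batch response like the one above, the JSON array can be parsed and indexed by comment id. The excerpt below copies two entries verbatim from the response; the lookup id is the one whose coding matches this record's Coding Result table.

```python
import json

# Excerpt of the raw LLM response above (two entries, copied verbatim).
raw = '''[
  {"id": "ytc_UgyvbulXDmxAGY2qU3R4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxOoa4FixJpMwwykFh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Index the coded records by comment id for direct lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

# Pull the coding for the comment shown in this record.
coding = by_id["ytc_UgyvbulXDmxAGY2qU3R4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → developer virtue none resignation
```

In practice a batch prompt can return ids in any order, so indexing by id rather than by position avoids misattributing codes to comments.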