Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just sayin', it would be unlikely that humans will be destroyed by A.I IF we all don't have differences between eachother so while developing these machines, we wouldnt make ANY mistake that could possibly lead to humanity-vaporization. This is a crucual point cause if we work it right now, we might be saving the world
youtube AI Responsibility 2024-08-23T16:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzB2pwt7p2e5-qWHxl4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyskzGgEFCl2WAlPzJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz2B6iHgNeg_pQzALR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyvVT8GkVfzynkx7VN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwYCIs3t8N9sEVqH854AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyXQPD46beU3gMmn3Z4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxeJ4xeWilpWBlwBqV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzMIAwrqv1JYVtZVVB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxKvYZbOby7VEcjEaB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwke_Q4r2rzWMr2fV14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
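A batch response like the one above can be matched back to an individual coded comment by its id. A minimal sketch in Python, using an excerpt of the response (two of the ten records; variable names are illustrative, not part of the pipeline):

```python
import json

# Excerpt of the raw batch response shown above (two of the ten records).
raw = """[
  {"id": "ytc_UgzB2pwt7p2e5-qWHxl4AaABAg", "responsibility": "company",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwYCIs3t8N9sEVqH854AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

# Parse the JSON array and index the records by comment id,
# so any coded comment can be looked up directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgwYCIs3t8N9sEVqH854AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → distributed regulate approval
```

This is the lookup that the coding-result table reflects: the record for id ytc_UgwYCIs3t8N9sEVqH854AaABAg carries the distributed/consequentialist/regulate/approval values shown above.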