Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree AI is risky if we're not careful, and as it gets smarter, we need to find a way to keep it "aligned" with humanity.. somehow.. But still some of those "disturbing chats" at 16:00, are just the result of whatever prompt they gave it, causing it to pattern-match with scifi stories about AI. Not it's actual thoughts or whatever.
youtube AI Governance 2023-07-10T05:2… ♥ 20
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwKAkYBY0F_l4T0qN14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7hRtFSUZiCt-L-zV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgynQBPXRI3F6nC9HFd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyjSjZ8pGvFimHW71B4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"skepticism"},
  {"id":"ytc_UgyklT7qxU9FRP4tLPR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2j9Hghutf9ErjDP54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQaD1WXK5uKjHW_dx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz3R4brHcH8Cds_gRx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxiyJcdVO6ugRiwKsl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwnaoBizHLu3lja7194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
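The raw response is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of how such a response can be parsed and matched back to a comment (the `raw` string here is an excerpt of the response above; the variable names are illustrative, not part of any tool):

```python
import json

# Excerpt of the raw LLM response: one coded entry per comment id.
raw = '''[
  {"id":"ytc_Ugz3R4brHcH8Cds_gRx4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Parse the array and index the codes by comment id for lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the code assigned to the comment shown above.
code = by_id["ytc_Ugz3R4brHcH8Cds_gRx4AaABAg"]
print(code["responsibility"], code["emotion"])  # → distributed fear
```

Indexing by id rather than by position tolerates the model returning entries out of order or dropping one.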