Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, yes, absolutely. Eliezer Yudkowsky is right. If we keep going down this route of developing more and more capable AI before we have any idea how to make them safe, we are extremely likely to end up with a humanity-destroying superintelligence. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
youtube 2023-04-10T20:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
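A coded record like the one above can be sanity-checked against the coding scheme before it is stored. The allowed values below are a sketch inferred from the codes that appear in this report, not an official codebook:

```python
# Minimal validation sketch. ALLOWED is inferred from values seen in this
# report's coding results; the real codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "government", "company",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed"},
}

def invalid_dimensions(code: dict) -> list:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, ok in ALLOWED.items() if code.get(dim) not in ok]

record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(invalid_dimensions(record))  # → []
```

A record with a misspelled or missing value would be flagged, e.g. `invalid_dimensions({"responsibility": "devloper"})` reports all four dimensions, since the other three are absent.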
Raw LLM Response
[
  {"id": "ytc_Ugy83H09c4gfq6uxYjZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyP5lveoWOTrjWf6LJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyvg-CH9GSOCPUEVbR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQw4JWhX2nhMY0u9p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2pEfsmYXiwmu2NDt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzVRUDndIfytgUwgqp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwqYG-J4Ib1nvVZvox4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzR5-C7aSAS3ioMawB4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyzCdzKxfVemspBmCZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwjaWrLdpLPZF1-8FJ4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
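Because the model returns one batch response covering many comments, matching a comment back to its codes means parsing the JSON array and indexing by `id`. A minimal sketch, abbreviated to two entries from the raw response above:

```python
import json

# Abbreviated excerpt of the raw batch response (two of the ten entries).
raw = '''[
  {"id": "ytc_UgwqYG-J4Ib1nvVZvox4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy83H09c4gfq6uxYjZ4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Build an {id: codes} lookup so each comment's codes can be retrieved directly.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

print(codes_by_id["ytc_UgwqYG-J4Ib1nvVZvox4AaABAg"]["emotion"])  # → fear
```

This is how the "Coding Result" table for the comment above can be assembled: look up the comment's `id` in the parsed response and render its four dimensions.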