Raw LLM Responses

Inspect the exact model output for any coded comment; a short parsing sketch follows the raw response at the end of this section.

Comment
If we do get aligned AI once, we can hand the rest of the problem off to it. Let it develop robust, reliable solutions to alignment. Implement a system to check if AIs are aligned. Whatever. That said, it isn't like we get loads of tries to get it right either. We need to get it right on the first try.
Source: YouTube, "AI Moral Status", 2023-08-21T17:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy7FVSeUckDxDFkFlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzTDv_FtVbLnXZTsfd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwrUS9-tXujx_nEQZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxLMSwhhkz5ErsWwbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwExT8ZKqhZJ3nrwVd4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyR8PEa5Zru6x8qbWp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzACBaVcCtlB3k_4E54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwQEEGjdY9BszgsCMR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw23nNp6RKYdqmZB494AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyIPLrB3JirK8MOddp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]