Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can you do a video about safety and alignment and a speculative extrapolation for what various safety and alignment researches deem as being ideal rules AI should follow and what the good and the bad from such rules could be when infinitely extrapolated.
youtube · AI Governance · 2023-12-31T18:0… · ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       contractualist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxYlJ9eqa3WImOownZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw9YNbxIW3Yxt4gP5Z4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzUpruFDjT1glcxTIx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyqBkgZ-Ps1kXq7O694AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw3gn-AnkCzmr1K29t4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFpMBHRnnUcReIEBB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzqt8_jdPvKYCi-SM94AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyAUh_vKuyewkzyRQ94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxAyB-HSV3KqeyTPr14AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgycjZNcYmbVUKL7ach4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]