Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree there is a better way to get to AGI and that they are brute-forcing the current architecture. What I don't agree with is that GPT-4 doesn't understand and just predicts the next word. I get that that's its design, but it's able to outperform that design. "Predicting the next word" alone doesn't explain its outcomes. The growing consensus is that it's building some type of world model.
Source: YouTube · AI Moral Status · 2023-08-21T18:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwh1YqHGzd7HeI9q2F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx5dpslKC_pzc8cIKR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzQBU56yTBEzpB5Fwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwo-SZ2WAUWjnBlQR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw6dst-EFe_1iKrCQ14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy5NYgj7ubcgRs14ul4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxJIXmO9UVgXmU5Zzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwVIf77ZXhhZCzF6v14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwBqvhkMld3x9OaPZ54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzag86SZ01DgpnqP754AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
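The raw response above is a JSON array with one coding record per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's coding looked up (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response shown; the function name `coding_for` and the truncated sample array are illustrative, not part of the original pipeline):

```python
import json

# Illustrative two-record excerpt of a raw LLM response in the format above.
raw_response = '''[
  {"id":"ytc_Ugwh1YqHGzd7HeI9q2F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx5dpslKC_pzc8cIKR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]'''

def coding_for(response_text, comment_id):
    """Return the coding record for one comment id, or None if it is absent."""
    records = json.loads(response_text)
    return next((r for r in records if r.get("id") == comment_id), None)

record = coding_for(raw_response, "ytc_Ugx5dpslKC_pzc8cIKR4AaABAg")
print(record["emotion"])  # matches the Emotion row in the Coding Result table
```

Looking the record up by id rather than by array position keeps the inspection robust if the model returns records out of order or drops a comment.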