Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI only feeds from the Data we Human generate, I think it will eventually reach a point where it can no longer learn because there will be nothing else to learn, only extrapolate from past data but the danger is what's the limit to that extrapolation??????
youtube AI Moral Status 2025-06-05T11:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgydUQVlYzfVMHCEmg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzM_4oJAEKpurs0oZF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzDIQaW0jnC9SUtji14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwM08YnuS6lkD96erx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyZYLQlfYABN0rybYV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyke6VRoKGd1bGG-hl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3b1jIxjcARDUkIiF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw1Qx5nQs4kVToPdVV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwqqAMHI-66mAQKswB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxLn_H9OHvADRkdGQV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
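The raw response above is a JSON array with one coding record per comment. As a minimal sketch, assuming that shape (the function name and variable names here are illustrative, not part of the pipeline), the output can be parsed and indexed by comment id so the codes for any single comment can be looked up:

```python
import json

# Two records copied from the raw response above; the real array
# contains one record per coded comment.
raw = '''[
  {"id": "ytc_UgydUQVlYzfVMHCEmg14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzM_4oJAEKpurs0oZF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def index_codes(raw_response: str) -> dict:
    """Parse the model output and index each coding record by comment id."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw)
# The record for the first comment matches the Coding Result table above.
print(codes["ytc_UgydUQVlYzfVMHCEmg14AaABAg"]["emotion"])  # fear
```

A lookup like this is what lets the page map each comment back to its coded dimensions (responsibility, reasoning, policy, emotion).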