Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is a poor understanding of large language models. Think of a bucket full of 100 fish. You close your eyes and ask the bucket puller to get you the red fish. There are no red fish. The bucket puller has to pull something from the bucket or he will explode. So he pulls a blue fish. And you call him an incompetent liar.

But here is the catch: if you asked the bucket puller to get you a blue fish, he would be so good at it. Best blue fish bucket puller there ever was. If you needed anything pulled out of buckets, he would be really good at it. To pull things from buckets is his purpose. And he will explode if he doesn't pull things out. Anything at all, it doesn't have to be fish, he can pull it out. Thing is, if it is not in the bucket, the bucket puller will still pull the next closest thing to the request out. Remember, he will explode!

So is it the bucket puller who is the liar, or the one asking the bucket puller who is unreasonable? Did you know the bucket puller had no red fish in the bucket? Yes? Then the asker is unreasonable. If the asker didn't know what was in the bucket (the way things work now, we have no idea what is in these things; could be everything all together all at once; they won't really tell us what is in the bucket), he is reasonably frustrated, because they told him to ask the bucket puller to pull "anything" out.
youtube AI Moral Status 2024-08-09T23:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx402hFlGuw86Pdfzt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyRMyRcjMp4rsO7akt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfsqLRqIERJSyPKQh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5Y6K_1oDInUQ-Fpd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwMWX2tXudz0OeHzdl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw7e0E2yoPiABKKREN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz4Lg3NmKX5xi3CT_94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyZDC8Q6g-FsDBe-XB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwPqJYs_-QcSkpqJ0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxXXmxgs1E249kGuON4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
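A minimal sketch of how a raw response like the one above could be parsed into per-comment codings. The `ALLOWED` value sets are inferred only from the labels visible in this response; the real codebook may define more categories, and `parse_codings` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension (inferred from the labels visible
# in the raw response above; the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment id, dropping records with out-of-codebook values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if cid and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            out[cid] = codes
    return out

raw = ('[{"id":"ytc_UgyfsqLRqIERJSyPKQh4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
coded = parse_codings(raw)
print(coded["ytc_UgyfsqLRqIERJSyPKQh4AaABAg"]["emotion"])  # approval
```

Validating against the allowed sets catches malformed or hallucinated labels before they enter the coded table.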