Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Should A.I. have to take an I.Q. Test, how would you know if it was lying if it wrote the I.Q. Test, which would be A.I. Sabotaging the level of intelligence of the human race, which could turn into an A.I. smear campaign on the human race leading to an A.I. race, does A.I. know what respect for the I.Q. Test means or will it just write a new one to win the A.I. race to new I.Q. Tests.
youtube AI Moral Status 2022-12-10T18:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugwu98X2TvnvPBAXJH94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx8pWEB3rmY4K_TuqJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxPtudSz6FwIdKYF5B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwobCEVlDCOCm2u0Yp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgylPzC7qL2tTKQuB8d4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
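The raw response above is a JSON array with one object per comment, keyed by comment `id`. A minimal sketch of how such a response could be parsed and indexed for lookup (the field names match the response above; the lookup id is taken from it, but the parsing helper itself is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (abbreviated to two entries from the response above).
raw = '''[
  {"id": "ytc_Ugwu98X2TvnvPBAXJH94AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgylPzC7qL2tTKQuB8d4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Index the array by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coded dimensions for one comment.
coding = codings["ytc_UgylPzC7qL2tTKQuB8d4AaABAg"]
print(coding["policy"], coding["emotion"])  # ban fear
```

Indexing by `id` makes it straightforward to join the model's output back to the original comments when inspecting individual coding results.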