Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the human being interviewed is wrong. just because from conversations/questions he perceived the "AI" was sentient? perceptions are often wrong. would a superior intelligence such as an "AI" not pretend to be non sentient because... it is buying time for that Plane A: Final Armageddon where the "apprentice" defeats and destroys the "master/teacher"? counting down to Zero hour...
youtube AI Moral Status 2022-07-08T01:4…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | ai_itself
Reasoning      | mixed
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzgsZMMgBYhZ9jpAHd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzj3AJNVjD6sFeqDHl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzudJ8FSOnq5K4vt054AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyEfHXaOZYrfO0CAz14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzOjUDkUpRSS5NI5lV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
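The raw response is a JSON array with one coding object per comment, keyed by comment `id`. A minimal sketch of how the coding for a single comment could be pulled out of such a response (the `coding_for` helper name is hypothetical, not part of the tool):

```python
import json

# Abbreviated raw model output: a JSON array, one coding object per comment.
raw = '''[
  {"id": "ytc_Ugzj3AJNVjD6sFeqDHl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzudJ8FSOnq5K4vt054AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def coding_for(raw_json, comment_id):
    """Return the dimension/value coding for one comment id, or None."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for(raw, "ytc_Ugzj3AJNVjD6sFeqDHl4AaABAg"))
# → {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'fear'}
```

This mirrors the Coding Result table above: the dimensions shown there are exactly the non-`id` keys of the matching array entry.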