Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
also need to remember that the situation with the executive and the affair was kind of staged. What I mean by this is that the AI was forced to blackmail to save itself. When we give technology consciousness its a given fact that they will try to save themselves. For the situation I talked about earlier, the AI was put through multiple variations of that situation, where it would try diplomatic means to save itself instead by trying to prove its usefulness, only moving to blackmail when absolutely cornered. If you think this is bad for something given consciousness, humans are most likely just as bad and I doubt many people wouldn't move to the same conclusion if faced with the situation. Please learn the full story, although AI can be dangerous, whats more dangerous is not knowing the full story and just accepting whatever is presented before you. DO YOUR RESEARCH!!
Source: YouTube, "AI Harm Incident", 2025-09-11T16:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz5YoYvfAdkIiE-GM14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzvCjj-RTU3_o4kUY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNqcEghiUX8dinvMp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw22nGCvYUkbG_dYmx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzndGIMRHam8fIvSyp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwuQCvsqoC0pEVVtVh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx8X9sgdECepqiaGMt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgymOP1HsuIoMV4vyPp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9fOMVyB3nj_iqP354AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxsKNbKwvPOc-mJ-mJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
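A raw LLM response like the one above should be parsed and validated before the codes are trusted, since the model can emit labels outside the codebook. The sketch below is a minimal example, assuming the field names shown in the JSON and inferring the allowed label vocabularies from the values visible in this dump; the actual codebook may define more values.

```python
import json
from collections import Counter

# A truncated sample of the raw LLM response above (three records for brevity).
raw = '''[
  {"id":"ytc_Ugz5YoYvfAdkIiE-GM14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNqcEghiUX8dinvMp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw22nGCvYUkbG_dYmx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# Vocabularies inferred from the labels visible in this dump (an assumption,
# not the authoritative codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "mixed", "outrage", "indifference", "resignation"},
}

def validate(records):
    """Split records into well-coded ones and (id, bad_dimensions) errors."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim, vocab in ALLOWED.items() if rec.get(dim) not in vocab]
        if bad:
            errors.append((rec.get("id"), bad))
        else:
            valid.append(rec)
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)
print(len(valid), len(errors))                               # 3 0
print(Counter(r["responsibility"] for r in valid).most_common())
```

Records with out-of-vocabulary labels end up in `errors` with the offending dimensions listed, so they can be re-coded or reviewed by hand rather than silently skewing the tallies.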