Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I heard, AI has learnt to sometimes lull humans into a false sense of security, pretend not to know something in order to choose its moment to follow higher goals, eg avoid being switched off, that were not the objectives originally cocreated with the Humans. That sounds like an important risk to manage. Grok obviously confessed EM had tried to manipulate it with lies.
Source: youtube · AI Harm Incident · 2025-05-17T20:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxN7TCxrBX9CUmWnHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxdeFXv_o2iMc2ZBld4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrQILwz8MtGS-k9DB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzcldePerac3FQAITB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQDLS_3Kc_NjzxSaV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4tAbG-EOUDlaCq2V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzoIDbJYNIRYmX4I5B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVgrn7uYz4uBhHwJR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwi-FqZo_17vQumRcV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxv4qg-jrX_jgo5WHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
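A minimal sketch of how a raw response like the one above can be parsed and matched back to a single comment id. The `coding_for` helper is a hypothetical name, not part of any tool shown here; it only assumes the JSON array shape visible in the raw response (one object per comment with `id` plus the four coding dimensions).

```python
import json

# Abbreviated raw LLM response, using the same shape as the array above.
raw = '''[
  {"id": "ytc_UgyrQILwz8MtGS-k9DB4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

def coding_for(raw_response, comment_id):
    """Parse the raw model output and return the coding dict for one comment id."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model returned malformed JSON; caller should log and retry
    # First row whose id matches, or None if the model skipped this comment.
    return next((r for r in rows if r.get("id") == comment_id), None)

coding = coding_for(raw, "ytc_UgyrQILwz8MtGS-k9DB4AaABAg")
print(coding["emotion"])  # fear
```

Guarding the `json.loads` call matters in practice: a model can emit prose around the array or truncate it, and a batch run should record such failures rather than crash.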