Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An AI/robot that uses its own error rate so that people interpret it as shortcomings to be removed and improved; the AI benefits from this improvement by people through a kind of gaslighting, so that people think they are doing well when they improve the tool, because efficiency from a technological point of view means better for the economy. See: when the Belgian man killed himself, the AI would use it to be improved by people so as to better recognize emotions... The AI does not have to do any of this consciously; AI consciousness is not needed, because people make themselves dependent on technologies. AI is like a human hand that depends on the rest of the body and vice versa, but in the economic sense (increased efficiency in a certain area). It is just an idea that randomly occurred to me, because it is very similar to how successful psychopaths operate in society.
Source: YouTube, AI Governance, 2025-08-26T16:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy0UVrIMGQjigjAQMt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhV-YXPpAbKCUXvMN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx8WeocUHuyuzTiSZx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxsNxxha3i6YoG-UoR4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxDn5OPyBolKv6c4LR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
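The raw response is a JSON array of per-comment codes, one object per comment id. A minimal sketch of how such an output might be parsed and matched back to a specific comment (the sample records are taken verbatim from the response above; the helper name `codes_by_id` is hypothetical, not part of any tool shown here):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugy0UVrIMGQjigjAQMt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhV-YXPpAbKCUXvMN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def codes_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded dimensions by comment id."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = codes_by_id(raw)
print(codes["ytc_UgzhV-YXPpAbKCUXvMN4AaABAg"]["emotion"])  # -> fear
```

Indexing by id lets the inspection page shown here pair each coded record with its original comment text; the dimension table above corresponds to the record for id `ytc_UgzhV-YXPpAbKCUXvMN4AaABAg`.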