Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The main problem with AI is whatever goal you give it where it reaches a certain intelligence it will realise that humans will always be able to shut it down and kind of like how a vaccine is better than cure because it’s better to get rid of the problem first AI will eventually start targeting humans
youtube 2026-03-15T01:3…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: liability
Emotion: fear
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyWBs9lJNL6C6Vva154AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz0QQWCE6OiYqgnOeV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyzVYDTfZec-PruxOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1Nks6jWm9c7Kmk9t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxyrhxh7pF0NtL23554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxpwXYIHaM6FthYfNZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQNzTypd2-eQNoWtV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzl0LCcUKtc0ch39iJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwf0CTOay_to7vvk554AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyGUqMG60qmnjWDAn14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
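The raw response is a JSON array of per-comment coding records, so matching the coding result back to a specific comment is a parse-and-lookup. A minimal sketch (the record and id below are copied from the response above; the variable names are illustrative):

```python
import json

# Raw batch response from the model: a JSON array of coding records.
# This sample record is taken verbatim from the response above.
raw = '''[
  {"id": "ytc_Ugzl0LCcUKtc0ch39iJ4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the records by comment id so any coded comment can be looked up
# and compared against the exact model output.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_Ugzl0LCcUKtc0ch39iJ4AaABAg"]
print(coding["responsibility"])  # -> ai_itself
print(coding["policy"])          # -> liability
```

The dict index makes the check O(1) per comment, which matters when the same batch response is consulted for every comment on the page.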