Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will never be solved because of hallucinations. Eventually the AI will produce so much data that it will shadow human made data. It will have to train it self off its own corrupted data and hallucinate more on top of that. Eventually it will be like dead internet theory and there will be more hallucinations then real facts
youtube · AI Moral Status · 2025-06-11T04:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw4_0jrPEWzN0wLAtx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzCTYcsa2_iQfw_xW54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwL0LD5TH5V4HV3bDJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxL5zzp4z6TC3YoLrZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz_5atK7A8m67tymTZ4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxJLdqC8G7J4boEzZt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwoK-5RApA_9dHm4Nh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyQfHtmGnw9kL0sbsp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzhm7qrR6zhgk5EWDd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx0eTvpA1ZU2D1fc6F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
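A raw response like the one above can be turned into per-comment codes by parsing the JSON array and indexing records by comment id. The sketch below is a minimal example of that step, assuming only the field names visible in the response; the `index_codes` helper and the `DIMENSIONS` tuple are hypothetical names, and the allowed values are just those observed in this batch, not a full codebook.

```python
import json

# Excerpt of a raw model response: a JSON array of per-comment codes
# (two records copied from the batch shown above).
raw = '''
[
  {"id": "ytc_UgwoK-5RApA_9dHm4Nh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw4_0jrPEWzN0wLAtx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
'''

# Coding dimensions present in every record of the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict[str, dict]:
    """Parse the raw LLM response and index records by comment id,
    skipping any record that is missing a coding dimension."""
    indexed = {}
    for rec in json.loads(raw_json):
        if all(dim in rec for dim in DIMENSIONS):
            indexed[rec["id"]] = rec
    return indexed

codes = index_codes(raw)
print(codes["ytc_UgwoK-5RApA_9dHm4Nh4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it straightforward to join each coded record back to its source comment, as the "Coding Result" table above does for a single comment.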