Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
AI, no realisation of the outside world, trained on internet data --> more internet being covered with AI stuff --> AI trained on increasingly AI Generated data --> AI peaks on validated data --> AI quality gets regulated by its own faulty feedbackloop --> AI declines in usefullness.... AGI x000398 error... wont be here based on LLM's, but scaring people about it sure helps the narrative and keep the funding as long as it lasts.
youtube 2026-01-30T05:2…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxUl464t3mQXSKlpVB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgyfhWlJ48hiGlJZvO94AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugyb5UFN5IMQs4vyvAB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwLg0Wpc0f3eQmmKRR4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyuPmOQyYgcEaYxxp54AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgySgGIR72EPVIoeFcZ4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgxR0aWwPzEcWx2INd94AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwiWJXMqft-v0pPA2N4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwCRUnOd3pYRDpZGUZ4AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzRQMUHq8gJWH21BER4AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"}
]
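A raw response like the one above can be parsed into per-comment codes and checked against the coding schema before use. The sketch below is a minimal illustration, not the project's actual pipeline: the allowed value sets are inferred from the values visible in this output and may be incomplete, and `parse_codes` is a hypothetical helper name.

```python
import json

# One entry from the raw model output above (abbreviated to a single item).
raw = '''[
  {"id": "ytc_Ugyb5UFN5IMQs4vyvAB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Allowed values per dimension, inferred from the response shown above
# (assumption: the real schema may define more values than appear here).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_codes(text: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose every coded
    dimension holds a value from SCHEMA; malformed entries are dropped."""
    valid = []
    for entry in json.loads(text):
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(entry)
    return valid

codes = parse_codes(raw)
print(codes[0]["emotion"])  # indifference
```

Validating against a fixed schema this way catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt downstream tallies.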