Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
6:28 - analogy
7:16, 7:51, 8:19 - sentience, sapience, consciousness
12:20 - consequences of not knowing consciousness
12:44 - ethics
13:50 - problem of alignments
14:28 - AGI
14:48 - alignment problem ethics
16:04 - they only have to be lucky once, we have to be lucky always
20:00 - ethics, problems to solve and main questions of this topic
20:51 - how ai ('s consciousness) would be different to humans'
21:36 - once again the question of analogy
YouTube · AI Moral Status · 2025-03-25T22:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
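
Each coded comment carries four dimensions: responsibility, reasoning, policy, and emotion. As a minimal sketch, the labels visible on this page (and in the raw response below) can be captured in a small Python schema. The value sets here are an assumption inferred from this single batch; the actual pipeline may define additional labels.

    from dataclasses import dataclass

    # Label sets inferred from the labels observed in this view only;
    # the real coding scheme may allow more values per dimension.
    RESPONSIBILITY = {"none", "user", "developer", "ai_itself"}
    REASONING = {"unclear", "mixed", "consequentialist", "deontological"}
    POLICY = {"unclear", "none", "ban"}
    EMOTION = {"indifference", "approval", "mixed", "fear"}

    @dataclass
    class CodingResult:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Raise if any dimension carries a label outside the known sets.
            for value, allowed in [
                (self.responsibility, RESPONSIBILITY),
                (self.reasoning, REASONING),
                (self.policy, POLICY),
                (self.emotion, EMOTION),
            ]:
                if value not in allowed:
                    raise ValueError(f"unexpected label: {value!r}")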
Raw LLM Response
[{"id":"ytc_UgxJ9sxDPKLBjPXrYwx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgydA7tp2MkxeIhptXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgylY6TsVHiY8enguxh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgytJpX6jqTxIJK1-554AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},{"id":"ytc_Ugxn2Nc1VIdveD7BDxF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgyqVm91kkOdaWPtRTN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},{"id":"ytc_UgyEoEYa9RzMqXTaHKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgzM2-pi5ggXtdn4Tth4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzrfIVw5-WrNjABgut4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"ytc_Ugz6yCQT4TzJsTBdpQJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]