Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Watch Yudkowsky instead. Hinton, concerned as he might be, gives the viewer hope. Hope that as much money will be devoted to AI alignment as capability. Hope that it's even feasible on a 5 year timescale to align LLM AI systems. There is no hope
Source: youtube · AI Governance · 2023-05-14T23:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          resignation

Coded at: 2026-04-26T23:09:12.988011
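Each comment is coded on four closed-vocabulary dimensions. As a minimal sketch of that schema (the label sets below are inferred from the values visible in the raw responses further down; the actual codebook may include additional categories), the result record could be typed like this:

```python
from typing import Literal, TypedDict

# Assumed label sets, inferred from values appearing in the raw responses.
Responsibility = Literal["developer", "government", "user", "ai_itself", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "unclear"]
Policy = Literal["regulate", "ban", "none", "unclear"]
Emotion = Literal["fear", "outrage", "resignation", "indifference", "mixed"]

class CodingResult(TypedDict):
    id: str  # YouTube comment id, e.g. "ytc_..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```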
Raw LLM Response
[ {"id":"ytc_Ugzf1x-wj5-9HtClcnZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw2oEZP1ioTeyrQnIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxVzHLaMyI0mMU238x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyi9ZyNAeDBsa5Bx0p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxz2XdZeVcpTj1Qwfh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy_huqybHTx8btzEVZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxTy8ysbLocR7uZzzZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz5S2KXjsc5arBhHe54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"resignation"}, {"id":"ytc_Ugyopd9JLLnFI4ASzpd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyGj6HA_CUMtx65Eah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]