Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Incredible, thank you all. Sum, narrow ai (chess) AGI (artificial gen intelligence) super intelligence - smarter than humans in all domains. 2027 - agi. 99% unemployment. 2030 - smart robots. 2045 - singularity, development of new tech in minutes, humans will not understand. 2100 - free of human existence or we will not understand anything. Impact: vast unemployment. No re-training. AI can create wealth however how will humans respond to no work? Maybe crime, pregnancy rates. Governments under-prepared. Companies have no legal / moral obligations save make money for shareholders. Don't know how to make it safe or fix it. They won't figure it out. AI is a paradigm shift, a meta invention, won't be able to turn it off. Done right ai can solve climate change, wars, done wrong, humans will be gone. Ai creators do not know what's going on with AI. People leaving Openai to start ai safety firms. He advocates focusing on narrow ai uses, not super intelligence. He thinks we are living in a simulation. He's into longevity and bitcoin.
youtube AI Governance 2025-09-04T15:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzTFC_CMP3_4hQgRfN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxC45CBJUnsUkmhN_h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnnX_e7hIlgQmnM3J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyKirEQPRHbVdP_Oyh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwq0BtUmYcHmkZWpv54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwC5AdQwvZMlPABPwJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx6yvcWORR6JPGl6b94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyFIIWDe0sMTgUH8a54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwlZ7_GaP5iGqF7gq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzt3XrwZzxVnQJu1mt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
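Matching a coded comment back to its record in the raw batch response can be sketched as below. This is a minimal illustration, not the tool's actual code: it assumes the raw response is a JSON array of objects with the field names shown above, and `coding_for` is a hypothetical helper name. The inline `raw` string holds only one record from the dump, for brevity.

```python
import json

# One record copied from the raw response above (the full response is a
# JSON array of ten such objects, one per coded comment).
raw = '''[
  {"id": "ytc_UgwlZ7_GaP5iGqF7gq54AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]'''


def coding_for(raw_response: str, comment_id: str):
    """Parse the raw batch response and return the coding record
    whose "id" matches comment_id, or None if it is absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)


rec = coding_for(raw, "ytc_UgwlZ7_GaP5iGqF7gq54AaABAg")
print(rec["emotion"])  # fear
```

The lookup by "id" is what lets the per-dimension table above (Responsibility none, Reasoning consequentialist, Policy none, Emotion fear) be traced to the ninth object in the raw array.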