Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Quite a few challenges to this view from the assistant professor. There are a multitude of jobs which aren’t technical or science based, jobs where empathy, human-interaction and culture are relevant. I could list dozens of roles straight off. But also, AGI still needs to have its objectives, priorities and values set by humans. It doesn’t know which task to do first unless instructed. Over time these instructions potentially become higher level, as orchestration between AIs becomes increasingly easy. Yet humans will still be needed to make decisions where there are options. I think there is a lot of fear where in fact there is opportunity- if we build in values to AI as it evolves to AGI and SI, along with basic guardrails, much as we do with our children, this should ensure that our joint roles are complimentary and not exclusive. Super intelligence is better with humans than without, especially if provided with the best of our values.
youtube AI Governance 2025-09-07T13:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwB5DbfeR-Q0wX5lL14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwn-XCd0zWf4PgK-pJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw8jNCUQmFyHWoNGcJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxIwtA_f3pIeR3SA2t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzidIC7WvMBhy5g1Ex4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwbp9qrYLzAfI5chF94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtKlf5ItmrSSxeQid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxemCj-XncHA7IRe714AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQNReAZ6ntIXRQS5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgznP2zGYyblXNm7K294AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
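The raw response is a single JSON array covering a batch of comments, each record keyed by its YouTube comment id. A minimal sketch of recovering one comment's codes from such a response (the parsing helper here is illustrative, not part of the tool; the id and values are copied from the response above):

```python
import json

# Raw batch response from the model (truncated to one record for brevity).
raw = '''[
  {"id": "ytc_UgwQNReAZ6ntIXRQS5J4AaABAg",
   "responsibility": "none",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "approval"}
]'''

# Index the batch by comment id so each comment's codes can be looked up.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

codes = codes_by_id["ytc_UgwQNReAZ6ntIXRQS5J4AaABAg"]
print(codes["reasoning"], codes["emotion"])  # deontological approval
```

These per-id records are what the Coding Result table above renders for the selected comment.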