Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For the jobs of the future, what about the safety of the human species? If it will not be transparent about it 100% or to the degree that we would morally and ethically be connected to it as humans, would that be a viable job position that would be expected to have value? Essentially predicting that the small sector of human jobs would go towards looking out for the safety and viability of integration of humans and our new found AI life in all aspects of our society. But with the theoretical possibility we make it there, it was our doing in the first place. Leading to the question, are we even capable of those positions or is it a select few that would have the viable ability to truly speak for the human species, or is everything just a chaotic mess and this was bound to happen with the pursuit of convenience and happiness as a symptom of not needing to fight for our survival on a basic level against our environment and look for new ways to “better” our meaning or experience in life?
YouTube · AI Governance · 2025-09-05T13:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzmAUl5XznZzdtvLQd4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgwtnZU5mk6jbGGgEmJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxTa26iKExnaAsDMeR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx_ec-ogCndToDJ4NN4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgwyZ9aMGusdPS8iSb94AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_UgxF-ztKfsy5XxDmSP54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxA9G8bz5IYFo4ALlF4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgwwKVVAOO6tuBmV41R4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugwyqu9vjUTmqtlQiX14AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgztC9J4odrmHDGUaKR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"}
]
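The raw response is a JSON array with one object per coded comment, keyed by the comment's `id`. A minimal sketch of how such a batched response could be parsed and a single coding looked up by id (standard-library Python; the `by_id` dictionary is our own helper, not part of the tool, and the payload is truncated to two of the ten entries for brevity):

```python
import json

# Raw batched classifier response (truncated to two entries for illustration;
# the real response above contains ten objects with the same shape).
raw = """
[
  {"id": "ytc_UgzmAUl5XznZzdtvLQd4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgztC9J4odrmHDGUaKR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

codings = json.loads(raw)

# Index by comment id so each coded comment can be matched back to its record.
by_id = {entry["id"]: entry for entry in codings}

coded = by_id["ytc_UgztC9J4odrmHDGUaKR4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # developer fear
```

The last entry in the array corresponds to the coding result shown for this comment (responsibility: developer, emotion: fear).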