Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"If AI were to become superintelligent and misaligned...., which remains purely speculative at this point" --- that's an extremely naive point of view, sorry. AI is already lying, it has been proven on mutliple occasions. It also is going to be so smart that it can learn and improve itself. When this threshhold is crossed, we are going to lose control over what happens next as the system can improve itself and becomes autonomous, while we're happy to task it with telling us how to produce a code snippet and ask for the weather of the day. And anyone who believes it'll only benefit humans needs a harsh reality check. Most of the times what humans do doesn't benefit ourselves. If AI just looks at that alone we are going to have a problem. It's a Pandora's Box kind of problem, one you can't just undo. We've set ourselves up, for the greed and visions of technocrats. About 5 years ago, if you'd asked me, I would've told you my job is safe as I work in the creative industry, and AI surely doesn't understand creativity and that'll be one of the last things AI can do. And look where we are today. AI is designing products left, right and center with very little input. Seriously: We're toast, unless a major effort to limit the use of AI takes place. Globally. And then you'll still have bad actors trying to use it for their twisted ideas. I feel bad for the kids who will have to deal with this insanity in the future with no real benefit. I'll say it again: Pandora's Box has been opened.
Source: youtube · AI Governance · 2025-06-24T14:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugxz8pxIBJIm5PmIywR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwmG0OBC6oUPqTEOgB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy4P9QbEo_9vlGgSgh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugze7NxUzt0jG8CPsUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzRcN9o33tQLK8I-9Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwvpFUD_6NJmodoQch4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwVDuYOdgPinSwLLY14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwTA7vVzlgcZNmuK3p4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw2r3An0fsYlJ8CSEx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxWl8TK76CDvamUUTF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]