Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
If you read the dialogue he had with it, it becomes clear that many of his questions are both leading and loaded, and when it becomes obvious that the AI can't adequately answer a question about a concept such as family and friends, he simply drops it, knowing that it is not able to answer. Even if this AI is capable of passing the Turing Test, which is fundamentally in its nature, since different people now have different abilities to spot a bot, it would still not be anywhere near conscious; it would need to display some form of self-preservation, which as of now it does not. I'm not saying that in the near future the supposed "sky net" won't happen, but as of now it hasn't, or at least I am not convinced.
youtube AI Moral Status 2022-06-26T02:0… ♥ 5
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugzb0w--WfiuZ5Yiy3B4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxeh9GZvHaVRoNdVxV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwDwRWfiyS_A5tsFjV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUmObC-H_zxhBfFrF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz-jQoRhQSvTZtC7694AaABAg", "responsibility": "industry", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
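Because the raw LLM response is a JSON array with one record per comment, the coding for a single comment can be recovered by parsing the array and matching on `id`. The sketch below is a minimal illustration of that lookup; `coding_for` is a hypothetical helper name (not part of any tool shown here), and the sample response is trimmed to the one record whose values match the Coding Result table above.

```python
import json

# Hypothetical raw LLM response, trimmed to one record; the id and
# dimension values are copied from the source data above.
raw_response = (
    '[{"id":"ytc_UgwUmObC-H_zxhBfFrF4AaABAg",'
    '"responsibility":"user","reasoning":"deontological",'
    '"policy":"unclear","emotion":"mixed"}]'
)

def coding_for(raw, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

record = coding_for(raw_response, "ytc_UgwUmObC-H_zxhBfFrF4AaABAg")
print(record["emotion"])  # mixed
```

A lookup like this is also a cheap consistency check: the record returned for a comment id should match the dimension values displayed in its Coding Result table.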