Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What a BS talk! AI systems do not know what the meaning behind words is, because they are just tokens for AI systems! AI systems cannot doubt and therefore cannot identify logical contradictions. AI systems cannot think and therefore cannot learn by itself! That's why the LLMs of AI systems need to be trained on such gigantic databases so that AI systems can atleast make an impression of "intelligence". But AI systems are not intelligent, they are just heuristic forecasting machines juggling with weighted probablilities! If their training databases do include propaganda so does the answers of AI systems. Therefore the old IT rule applies: Garbage-IN => Garbage-OUT ... ! ... The quality assurance process for these gigantic databases used for training of LLMs decides about the overall quality of results produced by AI systems. If the user of AI systems wants to be able to judge the quality of answers from AI systems, they need to know the correct answer already upfront. And that's the main reason why AI systems can only in the hands of knowledge workers become usefull, effective & efficient! Otherwise AI systems will lead to pure confusions of the people!
YouTube · AI Governance · 2026-02-03T07:5…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   developer
Reasoning        deontological
Policy           ban
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
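The table above flattens one coded record into four categorical dimensions plus a timestamp. As a minimal sketch of how such a record could be modeled and checked in Python (the CodedComment class is hypothetical, and the label sets are inferred from the values visible in this section, not from a published codebook):

from dataclasses import dataclass

# Label sets inferred from values observed in this section; the full
# codebook may define additional categories (assumption).
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"ban", "regulate", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the observed value set for its dimension.
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")

# The record from the table above; its id is taken from the matching
# entry in the raw response below.
CodedComment(
    id="ytc_Ugw6sqQWWNZJRrynLfJ4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="ban",
    emotion="outrage",
).validate()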
Raw LLM Response
[ {"id":"ytc_Ugz-lUOWbfj5_qOx5KR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy4gQzDi7GzQEZDTpJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwq_yggVVC6kLpiaFp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzqZzJUtSk0N5793Wh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwxY_I5IMGahgFndZx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy26ACmaYK6CdWZHW54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx9L5p27Tus0rr-wWN4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz39CaHQ1FmOnw1LB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw6sqQWWNZJRrynLfJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyAKjwoJPFJA1ZKvvJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]