Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
3:36 - the problem is everything a LLM tells you is made up, it's just what is predicted to fit whatever you ask of it. It can't reason, it cant think, it can't confirm the validity of the output, it's just easier the "hallucinations" (a poor term as it implies a LLM does not hallucinate most of the time, or implies some form of sentience) to spot when it's wildly inaccurate or incorrect. Also AI != LLM.
youtube · AI Responsibility · 2025-12-17T08:1…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | developer                  |
| Reasoning      | consequentialist           |
| Policy         | unclear                    |
| Emotion        | mixed                      |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzAyB4UkzQDPD1dT5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyJdQc7VpWLV7Obpax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgyWyASXvzut5TnPQrp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugy0gvvEv-uNvolHgHZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugwd7PHqixPGiU76k9t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"frustration"},{"id":"ytc_UgyA-KhnhLpiJ1ZyC1h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugz_D2Nnqh5G0siPE5B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwFJbtsw0d_mVbzX5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgwGvMj1O2A0X7sefx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},{"id":"ytc_UgxJo3hRkgUEbNPDovl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]