Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But bhai what about human errors. AI will halucinate but ultimately it will not but human will always be prone to the error so i will choose AI OVER HUMAN
youtube 2025-01-16T20:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugz603xNBxz_dfYs8n54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyFTsHdt6te4nne5j54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgzEhDBsgWO_xVBftP94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwpjgqU-BUTnSdWo1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwVizKzTu9cXko4M9t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxk4P3jzjZEQJofFQF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzFOuHG6BtkkEiftcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyCigQ2S2wp2HvebBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw0NdLkad3299qMmPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw7_WDr8-4t99bBxj14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
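The per-comment coding result shown above is recovered from this batch response by matching on the comment id. A minimal sketch of that lookup (assuming Python and a `raw_response` variable holding the JSON string; the variable names are illustrative, not from the tool itself):

```python
import json

# A truncated stand-in for the raw batch response shown above;
# the real string contains all ten coded records.
raw_response = (
    '[{"id":"ytc_UgwpjgqU-BUTnSdWo1J4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

# Parse the JSON array and index the records by comment id.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the codes for the comment displayed on this page.
codes = by_id["ytc_UgwpjgqU-BUTnSdWo1J4AaABAg"]
print(codes["responsibility"])  # ai_itself
print(codes["emotion"])         # approval
```

Each dimension in the coding table (responsibility, reasoning, policy, emotion) is just a key on the matched record.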