Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah, when writing research papers I only ask ChatGPT for factual summaries or interpretations, never sources. It makes shit up, simple as. Call it hallucinations or whatever, but factually they are fictitious pieces of data presented hitherto as fact.
Source: youtube · AI Responsibility · 2023-06-10T21:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyHdGtF1cG8R2ZuNkJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzvfXphDvaFTv5bLEh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz9oHtJxccnAkjoqQp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxxUcKQ0duzWGVXfKF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYLIHAIqy3C2TrTdp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzBcNhOZsC35HlKtVF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyOdk24JViBh4knif14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw9s80_6Y8jUC8jh694AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxkWAnjAIWTdCq7_cN4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgznQqbq89cHqYKDDi94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
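The raw response is a JSON array of per-comment coding records, each carrying a comment id plus the four coded dimensions. A minimal sketch of recovering one comment's codes from such a batch (two records excerpted from the response above; the indexing helper is an illustration, not part of the pipeline):

```python
import json

# Raw LLM batch response: a JSON array of per-comment coding records
# (two entries excerpted from the response shown above).
raw = '''[
  {"id": "ytc_UgyYLIHAIqy3C2TrTdp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzBcNhOZsC35HlKtVF4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]'''

# Index the records by comment id so one comment's codes can be looked up.
codes = {record["id"]: record for record in json.loads(raw)}

rec = codes["ytc_UgyYLIHAIqy3C2TrTdp4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # ai_itself resignation
```

Because the batch carries stable comment ids, the same lookup ties any row of the coding table back to the exact record in the raw model output.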