Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Love the channel! Even have my wife watching, laughing, and saying Lizzid Peeple! This video is sobering. Watched the video last night and, as someone who teaches AI marketing, thought you might want to consider what the industry now calls, AI "persistent hallucination". That is to say, AI, when pressed to produce in an area where it holds little data or recorded human experience, it makes things up rather than admitting it doesn't know. It has happened more than a few times where entire papers were written on a topic with false or non-existent citations from non-existent texts. In essence, it will try to BS its way through something instead of admitting ignorance. Now, to be clear, inexact or too explicit prompts can produce bad results, but making stuff up out of whole cloth is still the product. One solution you did not mention is the emergent inclusion of technology into the human brain via companies like Neuralink. (You decide whether Elon Musk is a good or evil genius.) This could keep us on par with AI in data capacity, speed of thought and, via pattern recognition, predictive capabilities. Thanks again for providing thought provoking and entertaining material!
youtube AI Governance 2023-07-11T20:0… ♥ 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz6R2AJZGUp1y0czeJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgztdyRDaMQodZqhB9R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdekGDNvaQg84cZeF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy4kdpaIA1HYItbjcp4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwzzMp0QRMZmyD8Ce54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxaMotJrP0lqL7CfyB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJyT5a8xM15zvaKQV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwjdgfwkOxasXPjSPV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFx0gxziH9n_Kt1-l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgygBj_wWTacQP6k8Td4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
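The per-comment coding shown above is recoverable from the raw response by parsing it as a JSON array and indexing the entries by comment id. A minimal Python sketch of that lookup, using two entries copied from the response (the field names follow the JSON above; the parsing code itself is illustrative, not the tool's actual implementation):

```python
import json

# Two entries reproduced verbatim from the raw LLM response above.
raw = (
    '[{"id":"ytc_UgztdyRDaMQodZqhB9R4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgxJyT5a8xM15zvaKQV4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# Index the codings by comment id so each comment's dimensions
# (responsibility, reasoning, policy, emotion) can be looked up directly.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgztdyRDaMQodZqhB9R4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → none indifference
```

The batch response carries codings for ten comments at once, so indexing by id is what lets a single comment's row (like the Coding Result table above) be filled in from the shared response.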