Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hallucinations: I added a saved prompt into Copilot's master list - along the lines of *"<myname> wants factual data unless he explicitly asks for fake or fictional data."* _(would "I" work?)_ And Copilot responded that it understood that the definition of winning would be to give a factual answer or "I don't know" / "I don't have that data" unless I explicitly authorize a fictional response...

I was in the process of adding the same into Gemini, but it wanted to build an exclusive contract about when and where and how... and grew overly complicated... So, I've not implemented the prompt in Gemini yet...

But make sure you use the word "winning" as that should trigger the result expectations. Imagine you ask Gemini to write a short story (fabricate data). Imagine you ask Gemini to summarize a spreadsheet (don't fabricate data). Per Gemini, if it believes the definition of winning is that _a result is expected_ .. then it fabricates a result to win.

If you tell it that it cannot fabricate data unless explicitly asked... that maybe stops "hallucinations"... Both Copilot and Gemini said it would. I'm only less than a week in, so no proof or not.
YouTube AI Responsibility 2026-01-13T14:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyt0lvCGjAS8XXiQuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw43gWbkV7n7TkZWa94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgwZIp5RYHKecEFhB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0EjCY6IigY2r9j3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxu0ZC1PynFIZvPTsN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxC0A8pYtC-6NMKZB4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzdASnYK7jAowTbgXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy39yyrzWs6gYtelQh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwHe25Fk63q4n9wLWl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwmPY7qWvaRtBXhwdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
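Raw responses in this shape can be parsed and sanity-checked before the per-comment codes are used. The sketch below is a minimal, hypothetical example (the `parse_codes` helper and the required-field check are assumptions, not part of the pipeline shown here); it uses two records copied from the raw response above and tallies the `emotion` field:

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugyt0lvCGjAS8XXiQuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw43gWbkV7n7TkZWa94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

# Fields every coded record is expected to carry, per the response format.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse the model output and keep only records with all required fields."""
    records = json.loads(text)
    return [r for r in records if REQUIRED_FIELDS <= r.keys()]

codes = parse_codes(raw)
emotions = Counter(r["emotion"] for r in codes)
print(dict(emotions))  # counts per emotion label, e.g. resignation and outrage
```

Dropping incomplete records (rather than raising) is one possible policy; a stricter pipeline might instead reject the whole batch if any record is malformed.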