Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by its comment ID.
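As a rough illustration of what such a lookup involves, here is a minimal Python sketch. The `raw_responses.jsonl` path and the one-batch-per-line storage format are assumptions for the example, not the tool's actual backend; the record shape matches the "Raw LLM Response" batch shown at the bottom of this page.

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_responses.jsonl"):
    """Return the raw coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Assumed layout: each line holds one batch, i.e. a JSON array
            # of coded records like the batch shown below.
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# Example, using an ID from the batch at the bottom of this page:
print(lookup_raw_response("ytc_Ugyt0lvCGjAS8XXiQuV4AaABAg"))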
Random samples (click to inspect):

- "The person at the 33 minute mark - I’m all about fellow curious minds speak ou…" (ytc_Ugye1szYg…)
- "Professor, I am currently reading your wonderful book "The Age of Surveillance C…" (ytc_UgyKWlcv_…)
- "so what the problem, professor? just need to implement those rules and like that…" (ytc_UgywbEUmu…)
- "Yeah. I was waiting for something along the lines of "what happens when we unplu…" (ytr_Ugy3pgopv…)
- "@laurentiuvladutmanea I've never seen AI animate anything, just replicate static…" (ytr_UgxrMMLbB…)
- "If used wisely, co pilot and chatgpt are quite usefull for code development . It…" (ytc_Ugy58_I9W…)
- "Can't take all tech jobs and work and there's nothing that suggests that will ev…" (ytr_UgyFGfgHV…)
- "@peachykins8283 I am totally against stealing art. But what AI does is the same …" (ytr_UgxAcGoIV…)
Comment
Hallucinations:
I added a saved prompt into Copilot's master list - along the lines of *"<myname> wants factual data unless he explicitly asks for fake or fictional data."* _(would "I" work?)_ And Copilot responded that it understood that the definition of winning would be to give a factual answer or "I don't know" / "I don't have that data" unless I explicitly authorize a fictional response...
I was in the process of adding the same into Gemini, but it wanted to build an exclusive contract about when and where and how... and grew overly complicated... So, I've not implemented the prompt in Gemini yet... But make sure you use the word "winning" as that should trigger the result expectations.
Imagine you ask Gemini to write a short story (fabricate data). Imagine you ask Gemini to summarize a spreadsheet (don't fabricate data). Per Gemini, if it believes the definition of winning is that _a result is expected_ .. then it fabricates a result to win. If you tell it that it cannot fabricate data unless explicitly asked... that maybe stops "hallucinations"... Both Copilot and Gemini said it would. I'm only less than a week in, so no proof or not.
youtube · AI Responsibility · 2026-01-13T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
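For downstream analysis, a coded row like the one above can be carried as a small typed record. A minimal sketch, assuming the four dimensions shown in the table and the value sets visible on this page; the actual codebook may define additional categories, and `CodingResult` is a hypothetical name, not part of the tool:

```python
from dataclasses import dataclass

# Value sets inferred from records visible on this page (not the full codebook).
RESPONSIBILITY = {"none", "company", "government"}
REASONING = {"unclear", "consequentialist", "deontological", "mixed"}
POLICY = {"none", "regulate", "ban", "unclear"}
EMOTION = {"approval", "resignation", "outrage", "indifference", "fear"}

@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

# The row rendered in the table above (full comment ID not shown there):
row = CodingResult(
    comment_id="ytc_…",
    responsibility="none",
    reasoning="unclear",
    policy="none",
    emotion="approval",
    coded_at="2026-04-26T23:09:12.988011",
)
```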
Raw LLM Response
```json
[
  {"id":"ytc_Ugyt0lvCGjAS8XXiQuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw43gWbkV7n7TkZWa94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgwZIp5RYHKecEFhB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0EjCY6IigY2r9j3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxu0ZC1PynFIZvPTsN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxC0A8pYtC-6NMKZB4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzdASnYK7jAowTbgXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy39yyrzWs6gYtelQh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwHe25Fk63q4n9wLWl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwmPY7qWvaRtBXhwdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
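Because the model returns a whole batch as a single JSON array, a thin validation pass can catch truncated or off-schema output before it enters the coded dataset. A sketch under the same assumptions as above (value sets inferred from this page, `validate_batch` is hypothetical, not the project's actual validator):

```python
import json

# Allowed values inferred from the batch above (not the full codebook).
ALLOWED = {
    "responsibility": {"none", "company", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"approval", "resignation", "outrage", "indifference", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response; raise on malformed or off-schema rows."""
    records = json.loads(raw)  # raises ValueError if the JSON was truncated
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {field}={rec.get(field)!r}")
    return records
```

Validating per batch rather than per record keeps a single malformed row from silently dropping its nine neighbors: the whole batch fails loudly and can be re-queued.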