Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Oops… recent data from industries that have adopted AI…. It hasn’t increased productivity because it cannot be trusted to be 100% accurate. And, oops you have to have a person verify it all cause you don’t know where the 10% of hallucinations will be located. LLM which is the only AI working model has a baked in bias for fluency over accuracy. Think about that…it will lie if the lie is more esthetically pleasing. You do not want that doing your filing! Or interpreting your latest medical procedure. LLM is being quietly removed from most applications because it doesn’t have the reliability required for most jobs. But..creating pictures? Great. Writing decent prose? Great. Handling facts…not so much
youtube · AI Governance · 2025-10-03T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy0-y-hREOS9YQLiaN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw8HLJqLCWLI9STEZN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzGLAuHy-JBVmEECc94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxxVI0NtqaA59MikKZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy3JSFzKB9oMvK4ePd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz2ZCaKC9Ma8rOmrVt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxUX0YfgniL1Pvz17N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykTdQkwlw7IEFKbC14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzlTm6ntMyvqiHBg554AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxW8S473hmWW_IBL_B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
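The raw response is a JSON array of per-comment codings, one object per comment ID, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and looked up by comment ID (the function and variable names here are illustrative, not part of the tool; only the field names come from the response itself):

```python
import json

# Two rows copied verbatim from the batch response above, used as sample input.
raw_response = """
[
  {"id": "ytc_UgzlTm6ntMyvqiHBg554AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy0-y-hREOS9YQLiaN4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coding dimensions, as named in the response objects.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID,
    skipping any row that is missing an expected dimension."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row[dim] for dim in DIMENSIONS}
        for row in rows
        if all(dim in row for dim in DIMENSIONS)
    }

codings = index_codings(raw_response)
print(codings["ytc_UgzlTm6ntMyvqiHBg554AaABAg"]["policy"])  # → regulate
```

Indexing by ID up front makes each subsequent lookup a constant-time dictionary access, which is what a "look up by comment ID" view needs when the batch contains many coded comments.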