Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Moral of the story: if you're dumb enough to use a chatbot lawyer, you're probab…" (ytc_Ugwy7Kkrx…)
- "The things they usually say don't add to an ai we have already added I think Y…" (ytc_Ugzad8Pe-…)
- "This is Not a robot!!! This is an AI CGI that you can interact with only on a c…" (ytc_Ugzu02wHz…)
- "Any AI, robotics, and automation is a replacement for human. Self-driving replac…" (ytc_UgzHv8XeP…)
- "On one side people cry of Ai and then create some trash and post online…" (ytc_Ugw_aXBpZ…)
- "Humans control every point in an AI robot. humans are responsible for programmin…" (ytc_UgyEWQsBW…)
- "If Avi Loeb is correct then we won't have to worry about AI killing all of us...…" (ytc_UgxOX5JzR…)
- "No. But then the robot would never run if it could manage from the background, …" (ytc_UgzjeRmR9…)
Comment
ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
youtube · AI Governance · 2023-04-18T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy1yXSq9QA_S5Zh0Ml4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1aD5wNQOEPQA45EB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzWY1ltIrhtZcRccI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwEs4KRepJrYTgOnLV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxyyoHfeGjorKdzGhl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwxxaW1E0AfiVTJBrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlkJogxkpjjW37CrJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRKldcsk09Ai3Bnnh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz1glUxdIvJjCroRJN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzcrr0Fj-kMBtsPEp14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
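The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the allowed value sets are assumptions inferred from the values visible on this page, not a definitive codebook.

```python
import json

# Allowed codes per dimension — inferred from the values seen above, so this
# is an illustrative subset, not the full coding scheme.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"mixed", "fear", "approval", "indifference", "outrage",
                "resignation"},
}

# Two sample rows copied from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugy1yXSq9QA_S5Zh0Ml4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwEs4KRepJrYTgOnLV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]'''

def validate(records):
    """Return a list of (comment_id, dimension, bad_value) problems."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append((rec.get("id"), dim, value))
    return problems

records = json.loads(raw)
print(validate(records))  # [] — both sample rows use known codes
```

A check like this catches the usual failure mode of model-coded output: a value outside the codebook (or a missing key), which would otherwise corrupt downstream tallies silently.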