Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment (youtube, 2025-03-14T16:5…)

About the end of the video, where ChatGPT claims it's programmed with certain beliefs and moral values, that simply isn't true, not in those words anyway. I'm sure most people know this but ChatGPT isn't explicitly programmed to say anything. First it's trained on as much text as the developers can gather (after this step its moral values should reflect those of humanity, IF the training text represents those values), and then it goes through a fine-tuning process where it isn't exactly programmed, but more "nudged" in certain directions, by having a team of humans review its answers and rate/correct them, which can indeed potentially change its morals
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
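The table above flattens one coding record into four dimensions plus a timestamp. A minimal validation sketch (Python, assuming only the category labels that appear on this page — the full codebook may define additional values) could check a record before it is stored:

```python
# Allowed values per dimension. NOTE: these sets contain only the labels
# visible on this page; the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def invalid_fields(coding: dict) -> list[str]:
    """Return the dimensions whose value is missing or not in ALLOWED."""
    return [dim for dim, ok in ALLOWED.items() if coding.get(dim) not in ok]

# The coding shown in the table above passes; an empty record fails all four.
coding = {
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "outrage",
}
print(invalid_fields(coding))  # []
```

Rejecting (or re-prompting on) records with out-of-vocabulary labels is useful because LLM batch output occasionally drifts from the requested category set.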
Raw LLM Response

```json
[
  {"id":"ytc_UgzkLRRgnS-hz69U5-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxB_rep2-pYyO-mrjl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGiui3j5r9q2ghJvh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwm3rShyEM5d5_YBXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzsHPX7GFkQiX8G_VZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxAzu0FNmPtVAg-7iV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzw7sR7nZxFf0qyuG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwl17iy1D_ntcKyUEB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy-lXzd5VODQaWHL154AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0c1q96r7N_f1F2BR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
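The raw response is a JSON array with one coding object per comment in the batch. A minimal sketch (Python standard library only, using two of the records shown above) of parsing it and indexing codings by comment ID:

```python
import json

# Two records copied from the raw batch response above; a real response
# carries one object per comment in the batch.
raw = '''
[
  {"id": "ytc_UgzkLRRgnS-hz69U5-h4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy-lXzd5VODQaWHL154AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

# Index the batch by comment ID so a single coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugy-lXzd5VODQaWHL154AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Keying on `id` is what makes the "look up by comment ID" view possible: the comment's ID maps straight to its coded dimensions.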