Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Your video is a masterclass in emotional reasoning disguised as a logical argume… (ytc_UgyzN9e9S…)
- The thing I have run into again and again using GPT5 is that it does not learn f… (ytc_UgxTM3_p9…)
- JIMMY U aged yourself with this video. "Gee guys how would it know I wasn't taki… (ytc_UgiYEkfqO…)
- Literally couldn't tell the difference between the AI art process and the physic… (ytc_Ugx2Ngaom…)
- We might start talking about them when the AI, unprompted, starts expressing wan… (rdc_jegcumw)
- I can't believe that you're trying to use some sort of climate issue on chat gvt… (ytc_UgxW59VZm…)
- Is it better to have a blissful, true, albeit fake connection with an AI agent, … (ytc_Ugzj9QS-c…)
- For everyone yes this is Generative Ai. Its uses Deep ML algorithms and transfor… (ytc_Ugy8ueDad…)
Comment
I love Neil, however, I hope he doesn't make the mistake Jordan Peterson has made: not taking social science seriously and having public opinions about disciplines he ignores. You see, actors, writers, psychologists, directors, artists, musicians, astronomers are interviewed often, and when questions explore social issues, instead of saying: I don't know, most give their uneducated opinion. As a scientist, Neil should just stick to what he knows and avoid talking about social science. The impact of AI is a question for an anthropologist, a sociologist, a political science graduate, not an astronomer so naive, he believes he lives in a democracy. Wrong opinions about astronomy might have little impact in day to day lives. Saying that losing your job to AI is a problem everybody can fix with creativity could really hurt people and prevent the creation of urgent public policies on the matter.
youtube · AI Moral Status · 2025-07-31T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxDwJxsviz873aqH-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzMAIbiee_l3jFVEjZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzU5jflk0VRHvPYeDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwEZnAwT_ngVx1ahIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtFThDM9gSq1FbW8R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzVJw3dmB5dftqfhj54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqLpIumeTYlfwoiFh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzIlHya3EIHHQRHJaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6YXAJHzE0jPyb3gB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
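The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a response might be parsed and indexed for lookup by comment ID (the field names and the two example rows are taken verbatim from the response above; the indexing helper itself is illustrative, not the tool's actual code):

```python
import json

# Two rows copied from the raw LLM response shown above.
raw = '''[
{"id":"ytc_UgxDwJxsviz873aqH-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzMAIbiee_l3jFVEjZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''

# The four coding dimensions plus the comment ID, as seen in the response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse a raw coding response and index it by comment ID,
    dropping any row that is missing a required field."""
    rows = json.loads(raw_json)
    return {
        row["id"]: row
        for row in rows
        if REQUIRED_FIELDS <= row.keys()
    }

codes = index_codes(raw)
print(codes["ytc_UgzMAIbiee_l3jFVEjZ4AaABAg"]["emotion"])  # mixed
```

Indexing by ID makes the "look up by comment ID" step a constant-time dictionary access instead of a scan over the array.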