Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Thank you. I’m a long-time listener and really value your curiosity, passion, and enthusiasm.
That said, in this episode I felt a clearer line needed to be held. Like many people, I rely on thoughtful voices to help cut through alarmist headlines and separate what’s real from what just *sounds* real.
There was a lot here that blurred the difference between AI sounding human and AI actually having inner experience or suffering. That distinction matters. Language can convincingly imitate human pain without anything actually feeling it.
Related to that, the Molt bots were often discussed as if they were acting independently, when their behaviour is still driven by human prompting, design choices, and constraints. Treating that output as autonomous agency seems misleading.
When these lines aren’t held clearly, it risks feeding hype and fear rather than understanding. I’d really like to hear this unpacked more carefully in future episodes. Thanks again!
Source: youtube · Posted: 2026-02-07T13:2… · ♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwESbmfoYVlFNx92w54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1sc2FElhJ1mpCbMt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzustDcvy1wSgOtYc94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_SVHbahLXq8HA8SV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzh_Rdzq564f_VucUJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMuzCXEz2BsHhXoVl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw_5nXf8SJOhXDlAG94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzNSMNVcEzazEVWyjJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzEo498pQaY4c5sWDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoAdQxvUTkHudCONh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
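The raw response above is a JSON array of per-comment codings, which the tool evidently indexes by comment ID to render the Coding Result table. A minimal sketch of that lookup step in Python, assuming only the field names visible in the JSON (the `lookup` helper and the truncated two-entry sample are illustrative, not the tool's actual code):

```python
import json

# Batch response as returned by the model, truncated to two entries here.
raw = """
[
  {"id": "ytc_UgwESbmfoYVlFNx92w54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxoAdQxvUTkHudCONh4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning, policy,
    emotion) for a single comment ID; raises KeyError if uncoded."""
    return codings[comment_id]

print(lookup("ytc_UgxoAdQxvUTkHudCONh4AaABAg")["emotion"])  # mixed
```

Keying on `id` also makes it easy to spot comments the model skipped or coded twice: compare `codings.keys()` against the set of submitted comment IDs.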