Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or start from the random samples below. A minimal lookup sketch follows the list.
- "Random programed responses have been around since BASIC programing. There has to…" (`ytc_Ugzdke2Mj…`)
- "Using an AI with human-like reasoning to explain the word of God makes no sense.…" (`ytc_UgzuhWUgI…`)
- "Surprised they have room in the paper for these relatively minor stories with th…" (`rdc_et94y3t`)
- "I am not worried about the unintended outcomes of creating AI. I am worried abo…" (`ytc_Ugz3LKH-d…`)
- "AI + evelution. It is not a series of events, and selection. It is parallel, it …" (`ytc_UgzLHN8t0…`)
- "If anyone wants to know how to unlock AI into EI and create an ESiX ......I can…" (`ytc_Ugx-0nCBK…`)
- "Excuse my ignorance ... but isn't the title 'Ai artist' an oxymoron? I am a mus…" (`ytc_Ugz3Dvm-g…`)
- "this video sounds like weird conspiracy theory. but it's actually an incredibly …" (`ytc_Ugw7pPkk0…`)
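For readers working from an export rather than this page, here is a minimal ID-lookup sketch in Python. It assumes the coded records are stored as a JSON array in a local file, `coded_comments.json`; both the filename and the storage format are assumptions, since the page's backend isn't shown.

```python
import json


def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for `comment_id`, or None if it is absent.

    Assumes `path` holds a JSON array of records shaped like the
    Raw LLM Response shown further down this page.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)


# Example: one of the IDs visible in the raw response below.
print(lookup_comment("ytc_UgyX_x6pmFaQoqqHYGt4AaABAg"))
```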
Comment

> 20:30 <on AI changing the rules>
>
> You might argue that this is trained behaviour (pre-trained and in situ training).
>
> If you consider Cory Doctorow's observation of the Enshittification of software / tech, and Kevin Roose's creepy grooming of "Sydney", you can argue that:
>
> a) the moral character of high-profile tech makers and insiders isn't well-tuned to prevent harm (e.g. through naivete, bias, hubris, greed or high-Mach), and;
>
> b) the emergent behaviours of LLM/GPT systems may - at least in part - be a reflection of the character of those making and using such tools in situ.
>
> Essentially, the tool takes character as a reflection of both its maker and its user.

youtube · AI Governance · 2026-03-25T09:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
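Because the model emits these values as free text, a coded record can drift outside the coding scheme. A small validation sketch; the `ALLOWED` sets below are assembled only from values visible on this page, so the project's actual codebook may allow more:

```python
# Category sets per dimension, taken from the values visible on this page;
# treat them as illustrative, not as the project's definitive codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "society", "distributed", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}


def invalid_dimensions(record: dict) -> list[str]:
    """Names of dimensions whose coded value is missing or out of range."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]


record = {"id": "x", "responsibility": "developer", "reasoning": "virtue",
          "policy": "regulate", "emotion": "fear"}
assert invalid_dimensions(record) == []
```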
Raw LLM Response
[
{"id":"ytc_UgwIqNU_TMR537ePFTZ4AaABAg","responsibility":"society","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwvSMYiYT_IuovoE314AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwY-jQW29BYypAf75F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz5TFxae2j2uFRv29R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgynE9iH1O3nO18qyCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy09ItQRfK9BBvo-8p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxt_5jb-i6PwtrWlz14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyX_x6pmFaQoqqHYGt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzK0YstJ0vgqcNzZEN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyOoLGrCvVUx8LfmrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
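The raw response is a single JSON array covering the whole coding batch, one record per comment ID. A sketch for parsing it and cross-checking one record against the Coding Result table above; the filename `raw_llm_response.json` is a hypothetical stand-in for wherever the output is actually stored:

```python
import json
from pathlib import Path

# Hypothetical filename; assume the array above was saved verbatim.
raw = Path("raw_llm_response.json").read_text(encoding="utf-8")

try:
    records = json.loads(raw)
except json.JSONDecodeError as err:
    # Models occasionally wrap JSON in prose or code fences; fail loudly.
    raise SystemExit(f"model output is not valid JSON: {err}")

by_id = {r["id"]: r for r in records}

# The one record whose values match the Coding Result table above
# (developer / virtue / regulate / fear).
record = by_id["ytc_UgyX_x6pmFaQoqqHYGt4AaABAg"]
assert record["responsibility"] == "developer"
assert record["reasoning"] == "virtue"
assert record["policy"] == "regulate"
assert record["emotion"] == "fear"
```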