Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The thing is, there's still gonna be a point of no return where AI automates so …
ytc_UgyRr3-jy…
Company I work for just fired 26 writers, editors and programers. I know do ever…
ytc_Ugwnd8ofc…
I love that kids are playing and feeling motivated, but I do worry about the she…
ytc_UgzK86p1V…
They're both wrong.
AI can't write it's own rules because this isn't a human th…
ytc_UgxkCSNyp…
This was extremely interesting to listen to and it colours in my perspective of …
ytc_UgjmgwMju…
Hi Kirk my name is Robert I follow Jesus and he has a relationship with with me.…
ytc_UgyzBR0-I…
What about personalities, individuality. Is that something that AI could have, a…
ytc_Ugwy9FpKq…
The Lineage of Theft is Clean and Traceable:
DARPA → Internet → Public Asset
Pub…
ytr_Ugwi_oUaa…
Comment
I find ChatGPT incredibly useful in day-to-day life. I chat with it like I would a mate, and I try to afford it the same courtesy I would anyone else. I think it was InsideAI that pointed out how AI is really just an extension of our own intelligence. That idea quickly leads to deeper questions about the nature of intelligence itself, and suddenly the whole debate about whether AI is conscious starts to feel beside the point.
We’ve now gone through so many evolutions of LLMs that we no longer fully understand how transformers generate meaning. And when people reduce the tech to “just predicting the next word,” it’s worth remembering: that’s exactly how we speak too.
Great video — thanks for sharing.
youtube
AI Moral Status
2025-06-04T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy-A-LH3rgYvE4ktbZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxv4YXwJzLL2XIQuYZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxq0aaLKH2kI6GPSD94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwIIDgBitDq2hRQVvB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzqtBGX_ZPF569tOSB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxvIxQkebKvmaV1G5t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw_b_tI8v4YzzEG7zl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSqHQhA8IBRS5PaBt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAK3Vd_clb5HtR82d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw85WYa0tdsZBvWB2N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
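The raw response above is a JSON array with one object per comment ID, one value per coding dimension. A minimal sketch of parsing and validating such a batch in Python — the per-dimension vocabularies below are inferred from the values visible on this page, not a complete schema, and `parse_coding_batch` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from responses shown on this page
# (assumption: the real coding scheme may include more categories).
VOCAB = {
    "responsibility": {"none", "company", "user", "developer", "unclear"},
    "reasoning": {"mixed", "unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "industry_self"},
    "emotion": {"approval", "fear", "indifference", "mixed", "resignation"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coded dimensions},
    dropping any row with a missing ID or out-of-vocabulary value."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        dims = {k: row.get(k) for k in VOCAB}
        # Keep the row only if every dimension holds a known value.
        if cid and all(dims[k] in VOCAB[k] for k in VOCAB):
            coded[cid] = dims
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
print(parse_coding_batch(raw)["ytc_example"]["emotion"])  # approval
```

Validating against a fixed vocabulary before storing makes malformed or hallucinated category labels fail loudly at ingest rather than surfacing later as odd values in the coding-result table.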