Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Ai doesn't think, it only provides the most probable answer to a long string of words. It's a very advanced version of the keyboard on your phone that suggest the next most probable words based on the previous ones.
That is so far from "normal intelligence" that I can't see how people are scared of it becoming "super intelligence" for something that can't even make one single thought.
We need to create rules for it, for what it is right now - AI companies are pushing us to fear a product they are not even close to make, instead of making regulations for what they have already made.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T06:4… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzsKBioXFrB8Xi7-8x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwid8bmG_g2GdlAuD14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugws9sfRXx301Cd8ChF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxV_YHQCHw31eZyceV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxgeR_zdgm1SNCilZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxlaTTwEVfCbQwj21p4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBzszkguA4vKkRVq94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxPQGbP5vRzUX5bx3R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPyLcmKVdlwcAsChF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOM8-_8L_1ct1gW4B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
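The raw response is a JSON array of coding records, one per comment, keyed by comment ID. A minimal Python sketch of how such a response could be parsed into a per-ID lookup (the variable names are illustrative, and only two of the records above are reproduced here for brevity):

```python
import json

# Raw LLM response: a JSON array of coding records (subset of the
# array shown above; each record codes one comment on four dimensions).
raw_response = """[
  {"id": "ytc_UgzPyLcmKVdlwcAsChF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxPQGbP5vRzUX5bx3R4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the records by comment ID so one comment's coding can be looked up.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the coding for the comment displayed above.
coding = codings["ytc_UgzPyLcmKVdlwcAsChF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself indifference
```

Looking up by ID rather than by array position keeps the lookup stable even if the model returns the records in a different order than the comments were submitted.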