Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Logline: "ROBOT SOFIA'S HUSBAND" / The engineer, who regrets killing his wife wh… (ytc_UgwvWzfwO…)
- For me, as a university student I only use chatgpt to help me study or help me u… (ytc_UgxdeKVIA…)
- Does AI have rights? You know she will be taken and tied up if you know what I m… (ytc_UgxNFg9o3…)
- Emad Mustaque @ 14:20: "... and if AI can help mitigate those lies...." ( we tel… (ytc_UgwgIa9U2…)
- As long as AI is decentralized...human ingenuity will explode. AI can provide th… (ytc_Ugxg7O73m…)
- We will one day explore the galaxy, but it will not be the biological us, but ou… (ytc_UgwexX9qs…)
- A calculator that get things wrong 15% of the time is still a bad calculator, no… (ytc_Ugx6el1ws…)
- Relax people it will not happen the reason everyone fears this because satanic w… (ytc_UgzLzgLAt…)
Comment
I get the impression it's mechanically impossible for AI to have consciousness, experiences, and/or proper inclinations. I mean, when I have a conversation with a simple AI, the way it works mechanically is that my initial statement or question is turned into tokens, those tokens get passed to a bit of code that uses the statistical model to predict the next word repeatedly until it hits a STOP. Want to make it a "reasoning" AI? It looks like we would just need to add another loop (with some extra text instructions) in the code and have the prediction piece through to STOP 3/4/5 times to extend the context of the conversation before generating the response that gets sent back to me. And if I don't send another statement or question, the AI would never generate another response.
If we can ever get an AI that more or less never stops generating "internal" predictions and can spontaneously send texts to connected users maybe we will get super intelligence, but I'm not sure we can meaningfully create super intelligence.
Source: youtube / "AI Moral Status" / 2025-10-31T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyUSZEt_D_L-srdtY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwAR-miK3McSNbQPlh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwsUjPct9PdMZ4XAVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwyUpBv84xu5HK-UJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwP7jRZYjiOlpH3Ve94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyAxxpxkUA1pNFS3IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0bCq7miXbvb3zCFR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyTcGoRQ6hE812SaF14AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzoyNSQG_OyUFPjpMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz_mNWRN9AgxSfaC994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
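The "look up by comment ID" view above can be reproduced directly from this raw response, since the model returns a flat JSON array keyed by comment ID. A minimal sketch in Python (the parsing code and `lookup` helper are illustrative assumptions, not the tool's actual implementation; the two rows shown are taken from the response above):

```python
import json

# The raw model output is a JSON array of per-comment codings; each object
# carries the comment ID plus the four coded dimensions from the result table
# (responsibility, reasoning, policy, emotion). Two rows reproduced here.
raw_response = '''[
  {"id": "ytc_UgyUSZEt_D_L-srdtY14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz_mNWRN9AgxSfaC994AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the batch by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    # Return the coded dimensions for one comment, or None if the
    # model's batch did not include that ID.
    return codings.get(comment_id)

print(lookup("ytc_Ugz_mNWRN9AgxSfaC994AaABAg")["policy"])  # → regulate
```

Indexing into a dict once, rather than scanning the array per query, matters when many coded batches are merged and inspected interactively.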