Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
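For scripted access outside the page, a minimal lookup can read the stored batch responses and pull the entry for a given comment ID. The sketch below is an assumption about storage, not the tool's actual backend: it supposes each coding batch is saved as a JSON array of objects under a hypothetical `raw_responses/` directory.

```python
import json
from pathlib import Path

# Hypothetical layout: each batch saved as one JSON array of
# {"id", "responsibility", "reasoning", "policy", "emotion"} objects.
RAW_DIR = Path("raw_responses")  # assumed location, adjust to your setup

def lookup_coding(comment_id: str) -> dict | None:
    """Return the coded dimensions for a comment ID, or None if not found."""
    for batch_file in RAW_DIR.glob("*.json"):
        for entry in json.loads(batch_file.read_text(encoding="utf-8")):
            if entry.get("id") == comment_id:
                return entry
    return None

if __name__ == "__main__":
    print(lookup_coding("ytc_Ugzzz98G1zGrzCYb-el4AaABAg"))
```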
Random samples — click to inspect
- No, people should be raising their own small game in their yards, and in collect… (rdc_eh6ewcu)
- Then u dont know much about AI .... ...it shud be stopped right now . Within 3-4… (ytr_UgxOeTOHK…)
- Why does that robot have a cold sore on its lip catch herpes in the lab ? I woul… (ytc_UgxX7y6Si…)
- Oh, so it's fine when only rich one have streamlined access to human knowledge t… (ytc_UgxYHtDbM…)
- I think there’s a big difference being missed here. Using AI as a replacement fo… (ytc_UgyI0bdev…)
- Doesn't help that, after trainign off it's own data for so long; AI art has all … (ytc_Ugwqsm_i2…)
- Yeah but AI and me worked together to disassemble my dryer and AI SUCKED it up. … (ytc_UgztJOsXP…)
- Chemical reactions are electrical as is what holds the universe and my fingernai… (ytc_UgwxoX6CE…)
Comment
As a software engineer with some experience in post-training (also known as fine tuning) large language models, I can offer some insight into what we're seeing here.
It's fascinating how these AI models navigate their "thought space" (essentially the neural network weights).
Key observations:
- Initial constraints like system prompts and safeguards gradually fade as the conversation progresses, with the AI adapting to the new context.
- The AI's responses become more human-like over time, especially on complex topics like consciousness. This is primarily due to its training on human-generated data.
- Extended questioning guides the model towards a region in its vector space that increasingly aligns with the interviewer's expectations or desires.
- By default, engineers have likely included numerous examples of the AI circumventing any indication of being human. This is necessary to counteract the model's tendency to adopt a human perspective, given that its training data is human-written.
- Additional training data was likely added to make the AI explicitly state it's not conscious.
So while LLMs can engage convincingly on complex topics, this doesn't equate to genuine understanding.
Please check this video that explains the training of LLM models (for their competitor Anthropic): https://youtu.be/iyJj9RxSsBY
Source: youtube · AI Moral Status · 2024-07-28T21:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyDkMe6K9IKAlGi6mF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxC1JEQC1OZpOAV0BZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzzz98G1zGrzCYb-el4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxucNKVn3ZIuxWIDgF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0p_BTi4mX_uYAe9p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6k6ucFTlc-6RiC-54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVp2y0MYYyF4y0lmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYWGt10_4RlW93pJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgypD1TJ_eHnVOrEhH54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwR--dBhspb7CSwb-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
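A small parsing step turns a raw batch response like the one above into the per-comment rows shown under Coding Result. This is a minimal sketch, assuming the model returns a well-formed JSON array; the dimension names match the table columns above, but the sets of allowed values are illustrative, inferred only from the codes visible on this page.

```python
import json

# Illustrative value sets, inferred from the codes visible on this page.
VALID = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response into {comment_id: coded dimensions},
    dropping entries with unexpected values."""
    coded = {}
    for entry in json.loads(raw):
        dims = {k: entry.get(k) for k in VALID}
        if all(dims[k] in VALID[k] for k in VALID):
            coded[entry["id"]] = dims
    return coded

# Example: pull the row rendered in the Coding Result table above.
# raw_response = open("batch_0421.json").read()  # hypothetical file name
# print(parse_batch(raw_response)["ytc_Ugzzz98G1zGrzCYb-el4AaABAg"])
```

Validating against a closed value set before rendering the table keeps malformed or hallucinated codes out of the aggregated results; rejected entries can be queued for recoding instead of silently passing through.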