Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Rail companies have been working on positive train control aka PTC for almost tw…" (ytc_UgwPHOefv…)
- "Working class are workers, as in laborers, as in with hands. AI is not a threat …" (ytc_UgwmyJtw_…)
- "I think you missed the point about the AI portion of the contract. It's not abou…" (ytr_Ugy15_7Qh…)
- "@LainorLean go to deviantart>topics>AI art and scroll through for a few minutes.…" (ytr_UgwLAg0cl…)
- "The mind of Man is limited . AI can only advance according to the INFORMATION fe…" (ytc_UgwjSD_gW…)
- "The problem.With robots, they're going to be like electric cars. Can you find th…" (ytc_UgwebSOIC…)
- "I was against AI until I caved and Chat GPT gave me the cute fanart I can't make…" (ytc_UgwekXr7c…)
- "Claude already generated it's own language to converse with other claude bots. S…" (ytc_UgygWj-8_…)
Comment
I don't think AI will work like that, a computer cannot just expand infinitely on itself. Your forgetting the AI hallucinations, when an AI overextends it messes up without realizing its messing up. Even super intelligent beings will have to have.... hmm... their "minds" contained, or they will face what we feel as madness.
This is just my suspicion but I do think we will find out true intelligence is not possible without a physical form that gates the amount of information a AI can take in, like the context parameters in current LLMs.
youtube · AI Governance · 2024-06-02T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzuyfz_sQ-pymJOWXV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-LYQOhZLY-hdBXWh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw7cIft2gE3J3Slugx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6DwkKvjjXUWzvCMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz5aPDaQIkNhxGUyLt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy7V5BOlgCnZG2NDqZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPw4nMxedrI_7yZhp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwYAOlahszwq7NQa1t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgySPEuGE3PKxNBPMaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqVnNx2LNMYYQwYhx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
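As a minimal sketch of how "look up by comment ID" can work against a raw response like the one above: the response is a JSON array of per-comment codings, so parsing it and indexing the rows by their `id` field gives constant-time lookup. The variable names below are illustrative, and `raw_response` embeds only a two-row subset of the array shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two rows from the array above, for illustration).
raw_response = """[
  {"id":"ytc_Ugy7V5BOlgCnZG2NDqZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7cIft2gE3J3Slugx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Parse the array and index each coding row by its comment ID.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding by ID.
row = codings["ytc_Ugy7V5BOlgCnZG2NDqZ4AaABAg"]
print(row["emotion"])         # -> indifference
print(row["responsibility"])  # -> none
```

Note that the lookup returns exactly the dimension values displayed in the "Coding Result" table for the matching comment.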