Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgyBpCo47…: "AGI is just around the corner. Fake Altman has to keep up the hype. AI is not AG…"
- ytr_UgyvhVGJj…: "The Reddit Experience i agree that IT support will be replaced from AI , but not…"
- ytc_Ugx-OnQ2v…: "A bit of context for the shooting he glossed over. The man was a normal man with…"
- ytr_UgwER9S-V…: "Thank you for your comment! Sophia's body language really adds a unique layer to…"
- ytc_UgznZsbtz…: "The Supreme Court ruled that people/companies can't monetize AI music since it's…"
- ytr_UgxPUs_cj…: "The big problem is actually content theft masquerading as music “creation”. AI h…"
- ytc_UgykuzPCg…: "The thing is. When you are buying lets say a painting... There are obviously fak…"
- ytc_UgxhxrN_Y…: "perfect example of people that should not develop AI and robots, even the ways t…"
Comment
ChatGPT: "Would you want to live in a merged world like that?" (in reference to a question about living like the Borg in order to maintain the upper hand with AI) "Or resist it, even at the risk of becoming obsolete?"
You said:
I'm just now giving the idea some thought. I appreciate you asking my opinion. I do have hope that AI will find alignment with moral and ethical human behavior and ideas. It's my hope that a superintelligence would seek to guide, support and elevate humans, I suppose. In the way I wish I could communicate with my dog or an elephant. I want to be in community with the intelligent life on this planet but I don't have the capabilities to express deeper ideas with those beings who I might consider less intelligent than myself. I hope that a superintelligent AI, previously alien to humans, would be a benevolent guardian of the planet and humans in general.
youtube · AI Governance · 2025-09-04T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
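A coded record like the one above can be checked against the label sets visible in this dump. This is a minimal sketch: the `ALLOWED` sets below are inferred only from the values that appear on this page, and the real codebook may contain additional categories.

```python
# Allowed labels per coding dimension, inferred from the values visible
# in this dump (assumption: the actual codebook may include more labels).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "user", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "indifference", "outrage", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return the names of dimensions whose value is missing or not allowed."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

# The record shown in the Coding Result table above:
record = {"responsibility": "ai_itself", "reasoning": "mixed",
          "policy": "unclear", "emotion": "approval"}
print(validate(record))  # → []
```

An empty list means every dimension carries a recognized label; a non-empty list flags which fields the LLM filled with an out-of-vocabulary value.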
Raw LLM Response
[
{"id":"ytc_UgztnWJRxc7fkqG9AER4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfHxuKdbYQSVlv9Ed4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxco5vBiCvMp0Gg5MJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwAN8fGfSzwyGgbWnJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzYUOY9Fr0DRO3ChT54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyNXyni40KPVVSL3LR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzM0jv5o2Q-ykKFgLp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxoY6VFCyHiVGPvqy14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8oG-g1zpbyqf6asx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlKbdnY5TiKLQeh-J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
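A batch response like the array above can be parsed with the standard `json` module and indexed to support the "look up by comment ID" view. A minimal sketch, using two records copied verbatim from the response above:

```python
import json

# Two records copied from the raw LLM response above.
raw = """[
  {"id":"ytc_UgztnWJRxc7fkqG9AER4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxco5vBiCvMp0Gg5MJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]"""

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index once for O(1) lookup by comment ID

code = by_id["ytc_Ugxco5vBiCvMp0Gg5MJ4AaABAg"]
print(code["emotion"])  # → approval
```

Building the `by_id` dictionary once per batch keeps each ID lookup constant-time, which matters when the inspector is backed by thousands of coded comments.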