Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxyLMCkI…: "Reality is this, all jobs will become automated and guess what you will get fire…"
- ytc_UgxB_NgA-…: "Actually being very rude to GPT-3 at least tends to produce better results. Bein…"
- ytr_UgzlGzgce…: "@MazzeKasurame We’re closer than you might think, you might’ve heard of G.A.I, b…"
- ytc_Ugzfch1_1…: "- Ilya Sutskever (former OpenAI chief scientist): Stated that today's large neur…"
- ytc_UgwpRUmvY…: "AI while learning to imatate Humans needs to be led like a bot to have purpose a…"
- ytc_UgyoJ55WG…: "I'm thinking how does it know who to learn from without having levels of importa…"
- ytc_Ugzi0pjMm…: "This guy is legit as fuck. His argument isn't about whether the AI is sentient, …"
- ytr_UgxMKFs0j…: "@goofballbiscuits3647 I'm aware. Guess we really don't know how much, if at all,…"
Comment

> Another "what if"... What if in the future, AI will not require humans to work or do much of anything. It will work for us. But then what? Humans are wired to have purpose. If we don't need to do anything, then what is the point? No need to water our lawns, no need to make food no need to even raise our children. When we defer to AI (robotics, agents, etc.) then what will we do for life?

youtube · AI Governance · 2025-09-04T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgweJhuDRG8hzGnYpW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKjz5w0ijLeKVpgpB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyW9mPbjk1iDsasGRR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzWZvHc1QUQplZL_YZ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwmievZBQ5wo4maJHR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzmraWXKXjpV0eYgIZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJnOB0THp_tC5r3Vt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzGe5EWsEQKwkhAas54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzzV_cfdmeWrYV06HR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0MKKtzUni3DSRcYt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
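The raw response is a JSON array with one record per comment ID, coding each comment on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and validated before its values reach the dashboard; the allowed value sets below are inferred from the samples shown above and are assumptions, not the project's actual codebook:

```python
import json

# Allowed values per coding dimension -- hypothetical: inferred from
# the sample response above, not taken from the real codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "approval", "mixed", "outrage", "indifference"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

raw = ('[{"id":"ytc_UgweJhuDRG8hzGnYpW94AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = validate_coding(raw)
print(coded[0]["emotion"])  # fear
```

Rejecting out-of-vocabulary values early is useful here because LLM coders occasionally emit labels outside the requested scheme; failing loudly keeps the dashboard's per-dimension tallies clean.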