Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
Random samples
- “As I’ve aged, my reality bubble has burst. I see the world MORE for what people …” (`ytc_UgwMVEYan…`)
- “Im a carpenter , have been for 16 years and I don’t feel replaceable….. YET😂…” (`ytr_UgylZ6NHV…`)
- “I'm not a artist but yea AI art is boring it can't take my attention. But an art…” (`ytc_Ugyx-cuPx…`)
- “Good news is, small businesses will thrive(since they are too broke to afford AI…” (`ytc_UgzpdmiXz…`)
- “Regardless of which way this case goes, the fact that OpenAI needs to log _EVERY…” (`ytc_UgyjyJR7H…`)
- “Why would you want to own a self driving car? That makes no sense to me but I am…” (`ytc_UgwXeUjvX…`)
- “I listened to 2/3 of the conversation, is amazing they did not talk about the GP…” (`ytc_UgwzifbZv…`)
- “The super intelligence will copy itself several trillion times. We won't be deal…” (`ytc_Ugwbdt_It…`)
Comment
Is Ai a byproduct of humans themselves- something smart enough to think for itself which ultimately destroy the thing less smarter and take over? Humans have done it to animals over thousands of years. We control the food chains. We are smart enough to kill things less smarter than us and our brains got smarter and we created a technological advanced society in which we control everything on this earth resulting in us multiplying to billions.
Ultimately then we become so smart we create something smarter which then does the same to us? Ai takes over then and has intellectual capabilities much stronger than humans and that destroys us. Then the life cycle carries on. Ai creates more intelligence and humans are a distant memory. Maybe that’s what the meaning of humans was. We are the ‘middle man’ between single cell life and intelligent life which goes on to explore galaxies and universes because we will never have that capability. was this always going to happen at some point in time?
youtube · AI Governance · 2025-09-24T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyJ-03TV5ouysbDied4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugx4FFSo3xZVhVeQdxh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyWYH2mhT3EWB1r29p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwJoMaoTjrGAxgNLQB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyMGGTmZqm6yGxpPxt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwWnvraP-gwZLrM5ZN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4jKbb4XN8QZZ6ZXl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxlC1GUZe6FMiaOOBV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxSUZgHLDy0ISO_t0h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz76BnOvpA87SkgHNh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
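The raw response is a JSON array of coding records, one per comment, keyed by comment ID. A minimal sketch of how such a payload might be indexed for lookup (the variable names are illustrative, not part of the tool; the sample records are taken from the response above):

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """[
  {"id":"ytc_Ugw4jKbb4XN8QZZ6ZXl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxlC1GUZe6FMiaOOBV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

# Index each coding record by its comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coded = codings["ytc_Ugw4jKbb4XN8QZZ6ZXl4AaABAg"]
print(coded["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view above possible: each dimension in the Coding Result table is just a field of the matching record.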