Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Nah, an AI artist is like someone describing a food, then a chef makes it and no…" (`ytc_Ugx5pOpaz…`)
- "Why are people so worried, A sentient AI could be a good thing, like a new per…" (`ytc_Ugw2fPOdU…`)
- "OK I got you but you shouldn't doubt everyone, you should have ways to filter th…" (`ytr_Ugx3KqygT…`)
- "A.I. will likely destroy everything. These men are looking for the holy grail. G…" (`ytc_UgzJXzjFp…`)
- "When he took her face off, it looked just like Robin Williams in Bicentennial Ma…" (`ytc_Ugzpviy9a…`)
- "I wouldn't let a robot treat me in the healthcare system. I feel like they'd be …" (`ytc_UgxLS9G1J…`)
- "He thinks that a program has beliefs? It is programmed for a function. That is i…" (`ytc_UgyG4uZU2…`)
- "If you have a God, you're good. You'd believe in God than AI. These people are p…" (`ytr_Ugx71TECB…`)
Comment
@aishitemasuka Because AI is not profitable unless it is this near-godlike level of "AGI". ChatGPT is not profitable. LLMs are not profitable. If the AI tech companies cannot make AGI happen, they all go bankrupt and the entire US economy implodes in a turbo-recession.
AGI is convenient marketing; "Oreo announces it's latest cookie is so tasty it may destroy the world", but they do also have to truly believe AGI is real and near. Because if they don't, they need to face the reality that they're all wasting their time on a bunch of garbage that has thus far only made the world worse.
youtube · AI Moral Status · 2025-10-30T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_Ugy0-5WwJf846xl2_8R4AaABAg.AOuwg1tZiz3AOv0aJqIVPm","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzLLCL_Vfj6k2zOIMF4AaABAg.AOuwY_h_9-zAOvD6madUt7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgzLLCL_Vfj6k2zOIMF4AaABAg.AOuwY_h_9-zAOvX4W8g2mc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwtpQPsnnkNjjYcD1V4AaABAg.AOuwMtPwH8WAOv2OrhBzc0","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwtpQPsnnkNjjYcD1V4AaABAg.AOuwMtPwH8WAOv453FfsUH","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgzF4EYAm-1EQ2_o6pl4AaABAg.AOuwKO6z7Y_AOuzBQqw93a","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxxy_KXEeL86_ndwU94AaABAg.AOuwCw6KFq0AOv-FwOjsUr","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgzE5c4Fl9aH1FMeCvZ4AaABAg.AOuvwwah6o3AOv151PvJyv","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgzE5c4Fl9aH1FMeCvZ4AaABAg.AOuvwwah6o3AOv3D0qgiDC","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzE5c4Fl9aH1FMeCvZ4AaABAg.AOuvwwah6o3AOv4GELK44z","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
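The raw response above is a JSON array of coded records, one object per comment ID. A minimal sketch of how the "look up by comment ID" view can be built from such a response (the parsing and indexing code here is illustrative, not the tool's actual implementation; the sample record is abbreviated from the output above):

```python
import json

# A raw LLM response, as in the example above: a JSON array where each
# element carries a comment "id" plus the four coded dimensions.
raw_response = '''[
  {"id": "ytr_Ugy0-5WwJf846xl2_8R4AaABAg.AOuwg1tZiz3AOv0aJqIVPm",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

# Parse the array and index the records by comment ID so any coded
# comment can be inspected directly.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up one comment's coding by its ID.
coding = by_id["ytr_Ugy0-5WwJf846xl2_8R4AaABAg.AOuwg1tZiz3AOv0aJqIVPm"]
print(coding["responsibility"], coding["emotion"])  # company fear
```

Indexing by `id` rather than list position keeps the lookup robust even when the model returns records in a different order than the comments were submitted.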